Research Experience
- 1981-2009: Professor, School of Science and Engineering, Waseda University
- 1979-: Visiting Researcher, University of California (UCLA), USA
- 1963-1975: Researcher, Central Research Laboratory, Mitsubishi Electric Corp.
Updated on 2025/01/20
Waseda University School of Science and Engineering
Waseda University Faculty of Science and Engineering
IEICE Fellow
IEEE Fellow
Vice President, Engineering Sciences Society, IEICE
President, Society of Information Theory and its Applications (SITA)
Information Transmission Engineering
Information System Engineering
Computer Science
Telecom System Technology Award, The Telecommunications Advancement Foundation
1995
Achievement Award, IEICE
1993
Kobayashi Memorial Award, IEICE
1993
Gendo Kumoi, Hideki Yagi, Manabu Kobayashi, Masayuki Goto, Shigeichi Hirasawa
International Journal of Neural Systems 33 ( 02 ) 2023.02
Error-correcting output coding (ECOC) is a method for constructing a multi-valued classifier from a combination of given binary classifiers. Based on the framework of coding theory, ECOC can recover the correct category from the remaining binary classifiers even if the outputs of some binary classifiers are incorrect. The code word table representing the combination of these binary classifiers is therefore central to ECOC. ECOC is known to perform well experimentally on real data. However, the complexity of the classification problem makes it difficult to analyze the classification performance in detail, and for this reason theoretical analysis of ECOC has been lacking. In this study, a binary classifier that outputs the estimated posterior probability with errors is said to be noisy, whereas one that outputs the true posterior probability is said to be noiseless. For a theoretical analysis of ECOC, we discuss the optimality of the code word table with noiseless binary classifiers and the error rate with noisy binary classifiers. This evaluation shows that the Hamming distance of the code word table is an important indicator.
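The decoding step of ECOC can be illustrated with a minimal sketch (the code word table below is a toy example, not one taken from the paper): each category is assigned a row of the table, each column defines one binary classifier, and a vector of classifier outputs is decoded to the nearest code word in Hamming distance.

```python
# Toy ECOC decoder: rows of the code word table are categories,
# columns correspond to binary classifiers (4 categories, 5 classifiers).
CODE_WORDS = {
    "A": (0, 0, 0, 0, 0),
    "B": (0, 1, 1, 0, 1),
    "C": (1, 0, 1, 1, 0),
    "D": (1, 1, 0, 1, 1),
}

def hamming(u, v):
    """Number of positions where u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def ecoc_decode(outputs):
    """Map a vector of binary classifier outputs to the nearest category."""
    return min(CODE_WORDS, key=lambda c: hamming(CODE_WORDS[c], outputs))

# The last classifier misfires on a sample from category "C", but the
# remaining classifiers still pull the decision back to "C".
print(ecoc_decode((1, 0, 1, 1, 1)))  # -> C
```

The larger the minimum Hamming distance between rows, the more classifier errors the table can absorb, which is the indicator the abstract points to.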
Shigeichi Hirasawa, Gendo Kumoi, Hideki Yagi, Manabu Kobayashi, Masayuki Goto, Hiroshige Inazumi
2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC) 2022.10
Learning and Estimation of Latent Structural Models Based on between-Data Metrics
Kenta Mikawa, Manabu Kobayashi, Masayuki Goto, Shigeichi Hirasawa
2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC) 2022.10
Learning-State-Estimation Method Using Browsing History and Electroencephalogram During Programming Language Learning and Its Evaluation
Katsuyuki Umezawa, Tomohiko Saito, Takashi Ishida, Makoto Nakazawa, Shigeichi Hirasawa
Communications in Computer and Information Science 1344 40 - 55 2021
The failure of learners to obtain sufficient knowledge is caused by various factors, such as the difficulty level and quality of the learning materials and the learner's prior knowledge. The use of the learner's learning log and biological information, such as brain waves, heart rate, and eye movements during learning, makes it possible to detect these factors. If different brain waves can be measured according to the difficulty level of task execution, the difficulty level of e-learning materials can be adjusted so that the optimum learning effect is obtained for each student. A system that obtains learning logs during learning has previously been proposed. However, the learning time alone is insufficient to understand the learning state of the learners. For example, if the learning time is short, it cannot be determined whether the learning materials were too easy or so difficult that the learner gave up. Therefore, we propose a system and a method for estimating the learning state of learners by comprehensively analyzing their learning history and brain waves. Moreover, we evaluate the learning state of high school students learning the C and Scratch programming languages using the proposed method. By comparing the estimated results with those obtained from the questionnaire administered after the experiments, we evaluate the effectiveness of our proposed method.
Katsuyuki Umezawa, Makoto Nakazawa, Manabu Kobayashi, Yutaka Ishii, Michiko Nakano, Shigeichi Hirasawa
Advances in Intelligent Systems and Computing 31 - 41 2021
Analysis of logic errors utilizing a large amount of file history during programming learning
Katsuyuki Umezawa, Makoto Nakazawa, Manabu Kobayashi, Yutaka Ishii, Michiko Nakano, Shigeichi Hirasawa
Proceedings of 2020 IEEE International Conference on Teaching, Assessment, and Learning for Engineering, TALE 2020 630 - 634 2020.12
We proposed an editing record visualization system that makes it possible to confirm how learners modified their programs by storing a learning log. This system was utilized in an actual flipped classroom and stored an enormous volume of learning logs. Each learning log contained all the source code being modified until the program was completed. We also developed a debugging exercise extraction system that automatically generates problems with syntax errors for debugging practice from these learning logs. In this paper, we propose a method of identifying logic errors, for which no error information is available, by analyzing the learning log.
Evaluation of Grouped Flipped Classrooms Compared with Three-Year Actual Classes Using a Questionnaire
Katsuyuki Umezawa, Takashi Ishida, Makoto Nakazawa, Shigeichi Hirasawa
Proceedings of the 2020 IEEE International Symposium on Accreditation of Engineering and Computing Education, ICACIT 2020 2020.11
The concept of the flipped classroom, in which face-to-face lessons are conducted at school after studying at home, is gaining interest. We proposed the 'grouped flipped-classroom' approach, which divides students in a face-to-face class into three groups based on the time spent on and the degree of comprehension of learning at home. We applied the grouped flipped-classroom concept to actual classes at the Shonan Institute of Technology, Kanagawa, Japan, for three years between 2017 and 2019. The number of students who took the classes was 183 in 2017, 189 in 2018, and 224 in 2019. In this study, we analyzed the findings of a questionnaire. The results make clear that there is a correlation between disliking grouping and finding the learning content of the lessons challenging, and that students with low understanding dislike grouping.
Upper Bounds on the Error Probability for the Ensemble of Linear Block Codes with Mismatched Decoding
Toshihiro Niinomi, Hideki Yagi, Shigeichi Hirasawa
Proceedings of 2020 International Symposium on Information Theory and its Applications, ISITA 2020 151 - 155 2020.10
In this paper, applying the technique of the DS2 bound, we derive an upper bound on the error probability of mismatched decoding with the ensemble of linear block codes, which was defined by Hof, Sason and Shamai. Assuming the ensemble of random linear block codes defined by Gallager, we show that the obtained bound is not looser than the conventional bound.
Detection of careless mistakes during programming learning using a simple electroencephalograph
Katsuyuki Umezawa, Makoto Nakazawa, Manabu Kobayashi, Yutaka Ishii, Michiko Nakano, Shigeichi Hirasawa
15th International Conference on Computer Science and Education, ICCSE 2020 72 - 77 2020.08
Learners encounter several difficulties during learning, such as the quality of the learning content, its difficulty level, and their degree of proficiency. It is possible to detect these difficulties by measuring the browsing history, the editing history, and biological information such as brain waves or eye-tracking information. In this paper, we measure electroencephalograph (EEG) information during programming learning. We focus on the relationship between task response time and the EEG, and try to detect careless mistakes caused by lack of attention. The experimental results show that careless mistakes during programming learning can be detected.
Decision Feedback Scheme with Criterion LR+Th for the Ensemble of Linear Block Codes
Toshihiro Niinomi, Hideki Yagi, Shigeichi Hirasawa
IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences 103-A ( 1 ) 334 - 345 2020 [Refereed]
In the decision feedback scheme, Forney's decision criterion (Forney's rule: FR) is optimal in the sense that the Neyman-Pearson lemma is satisfied. Another prominent criterion, called LR+Th, was proposed by Hashimoto. Although LR+Th is suboptimal, its error exponent is shown to be asymptotically equivalent to that of FR by random coding arguments. In this paper, applying the technique of the DS2 bound, we derive an upper bound on the error probability of LR+Th for the ensemble of linear block codes. The new bound is significant from two points of view. First, since the DS2-type bound can be expressed by the average weight distribution for a finite code length, we can compare the error probability of FR with that of LR+Th for fixed-length codes. Second, the new bound elucidates the relation between the random coding exponents of block codes and those of linear block codes.
Development of Debugging Exercise Extraction System using Learning History
Katsuyuki Umezawa, Makoto Nakazawa, Masayuki Goto, Shigeichi Hirasawa
Proceeding of the 10th The International Conference on Technology for Education (T4E 2019) 244 - 245 2019.12 [Refereed]
We have proposed an editing history visualization system which can confirm where and how the learner modified a program. We utilized this system in an actual flipped classroom and stored a large amount of learning logs. Each learning log contains all the source code in the process of being modified until the program is completed. We developed a debugging exercise extraction system to automatically generate problems for debugging practice from these learning logs. The tool extracted 18,680 source codes containing syntactic errors (which became practice problems) that could be used as debugging exercises from 16 weeks of program edit history data (31,562 files in total). The execution time was 488 seconds. Since the analysis needs to be run only once every six months, we consider this a sufficiently practical execution time.
System Evaluation of Ternary Error-Correcting Output Codes for Multiclass Classification Problems*
Shigeichi Hirasawa, Gendo Kumoi, Hideki Yagi, Manabu Kobayashi, Masayuki Goto, Tetsuya Sakai, Hiroshige Inazumi
2019 IEEE International Conference on Systems, Man and Cybernetics (SMC) 2019.10 [Refereed]
Application of grouped flipped classroom to two-year actual class and its statistical evaluation
Katsuyuki Umezawa, Takashi Ishida, Makoto Nakazawa, Shigeichi Hirasawa
283 - 288 2019.08 [Refereed]
In a flipped classroom, the roles of a classroom and homework are reversed. We propose a method for increasing the effectiveness of the flipped classroom lessons based on the self-study log information. Specifically, when students study by e-learning at home, we collect and analyze their learning logs and then classify students into groups based on their study time and the degree of understanding of the material. We call our proposed method a grouped flipped classroom. We applied it to actual lessons during 16 weeks in the autumn semesters of 2017 and 2018 at the Shonan Institute of Technology. The results revealed that students’ performance improved after the grouped flipped classroom lessons, especially in the group of students who had low understanding during self-study: there was a statistically significant difference between their average scores in the tests after the self-study and after the face-to-face lessons. In addition, the average scores in the tests after the face-to-face lessons were higher for students in the grouped flipped classroom than for students in conventional style classes (lecture style class and mixed ability class).
Understanding of Self-Study in a Grouped Flipped Classroom
Katsuyuki Umezawa, Takashi Ishida, Makoto Nakazawa, Shigeichi Hirasawa
Proceedings of the 15th International Conference on Education, Technology, E-Learning & Society (ICE19Hong Kong Conference) 1 - 8 2019.08 [Refereed]
We previously proposed a method for an effective flipped classroom on the basis of the self-study log information of students. We called this a "grouped flipped classroom." We applied the grouped flipped classroom method to actual lessons in 2017 and 2018. The results revealed that the grouped flipped classroom improved students’ performance. In this paper, we examine the issues regarding the method of self-study achievement test (SSAT) when applied to actual lessons in 2018. Specifically, we compare the scores of the SSATs that were conducted with and without supervision.
Shigeichi Hirasawa, Hideki Yagi, Manabu Kobayashi, Masao Kasahara
2019 53rd Annual Conference on Information Sciences and Systems (CISS) 2019.03 [Refereed]
Linear Programming Bounds for Multi-level Unequal Protection Codes
Tomohiko Saito, Toshiyasu Matsushima, Shigeichi Hirasawa
Proceedings - 2018 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2018 2917 - 2922 2019.01
In coding theory, it is important to find upper bounds on the code size given a code length and minimum distance. The Hamming bounds and Linear Programming (LP) bounds were proposed in previous works. On the other hand, Masnick et al. proposed Unequal Error Protection (UEP) codes and modified Hamming bounds as upper bounds on the code size of UEP codes. In our previous work, we defined 2-level UEP codes as a subclass of UEP codes and derived LP bounds for 2-level UEP codes. In this paper, we define multi-level UEP codes by extending 2-level UEP codes and derive LP bounds for multi-level UEP codes. Moreover, we show that the LP bounds for UEP codes are tighter upper bounds than the modified Hamming bounds.
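The LP bounds derived in the paper are more involved, but the classical sphere-packing (Hamming) bound that they tighten can be sketched in a few lines. This is a standard textbook formula, not the paper's bound:

```python
from math import comb

def hamming_bound(n, d, q=2):
    """Sphere-packing (Hamming) upper bound on the number of code words
    of a length-n code with minimum distance d over a q-ary alphabet."""
    t = (d - 1) // 2  # number of errors the code can correct
    volume = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))
    return q ** n // volume

# The binary Hamming(7,4) code meets this bound with equality:
print(hamming_bound(7, 3))  # -> 16
```

Perfect codes such as the Hamming and Golay codes attain this bound; LP bounds are generally tighter for other parameters.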
Probabilistic fault diagnosis and its analysis in multicomputer systems
Manabu Kobayashi, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E101A ( 12 ) 2072 - 2081 2018.12
F. P. Preparata et al. proposed a fault diagnosis model for finding all faulty units in a multicomputer system from the outcomes of tests in which each unit tests some other units. In this paper, for probabilistic diagnosis models, we show an efficient diagnosis algorithm that obtains the a posteriori probability that each unit is faulty given the test outcomes. Furthermore, we propose a method to analyze the diagnostic error probability of this algorithm.
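For intuition only, the posterior computation can be sketched by brute force in a toy three-unit system. The model assumptions here (fault-free testers always report correctly, faulty testers report 0 or 1 uniformly at random, and the test assignment below) are illustrative and not the paper's exact algorithm, which is designed to be efficient rather than exhaustive:

```python
from itertools import product

# Toy PMC-style probabilistic diagnosis: brute-force posterior over
# all fault patterns of a 3-unit system.
P_FAULT = 0.2                      # prior probability that a unit is faulty
TESTS = [(0, 1), (1, 2), (2, 0)]   # (tester, tested) pairs

def likelihood(pattern, outcomes):
    lik = 1.0
    for (tester, tested), out in zip(TESTS, outcomes):
        if pattern[tester]:                 # faulty tester: coin flip
            lik *= 0.5
        else:                               # sound tester: truthful outcome
            lik *= 1.0 if out == pattern[tested] else 0.0
    return lik

def posterior_faulty(outcomes, n=3):
    """P(unit i faulty | test outcomes), by enumerating fault patterns."""
    joint = {}
    for pattern in product((0, 1), repeat=n):
        prior = 1.0
        for bit in pattern:
            prior *= P_FAULT if bit else 1.0 - P_FAULT
        joint[pattern] = prior * likelihood(pattern, outcomes)
    total = sum(joint.values())
    return [sum(p for pat, p in joint.items() if pat[i]) / total
            for i in range(n)]

# Unit 0 flags unit 1 as faulty; the other two tests pass.
print([round(p, 3) for p in posterior_faulty((1, 0, 0))])  # -> [0.014, 1.0, 0.123]
```

Enumeration costs 2^n and is only viable for tiny systems, which is exactly the gap an efficient diagnosis algorithm addresses.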
Application and Evaluation of a Grouped Flipped Classroom Method
Katsuyuki Umezawa, Takashi Ishida, Makoto Nakazawa, Shigeichi Hirasawa
The IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE2018) 39 - 45 2018.12 [Refereed]
We proposed a new flipped classroom method in which students are divided into three groups before each class on the basis of their e-learning self-study logs and level of understanding. The three groups are students who studied the lesson and fully understand the contents, students who studied the lesson but do not fully understand the contents, and students who did not study the lesson and so do not understand the contents. The face-to-face learning in class is done separately for each group. We called this a “grouped flipped classroom.” We applied the above grouped flipped classroom method to 16 weeks of actual lessons in the autumn semester of 2017 at our university. The results revealed that the grouped flipped classroom improved students’ performance. In particular, for the group of students who had low understanding during self-study, there was a statistically significant difference between their average scores in the tests after the self-study and after the face-to-face lesson. In addition, the average score in the test after the face-to-face lesson was higher for students in the grouped flipped classroom than for students in conventional style classes (lecture style class and mixed ability class).
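As a sketch only, the three-way grouping rule described above might look like the following; the thresholds and labels are hypothetical, chosen for illustration, since the paper groups students from their actual e-learning self-study logs and achievement tests:

```python
# Hypothetical grouping rule for a "grouped flipped classroom":
# split students by self-study time and achievement-test score.
MIN_STUDY_MINUTES = 30   # assumed threshold: did the student study?
PASS_SCORE = 70          # assumed threshold: does the student understand?

def assign_group(study_minutes, test_score):
    """Return the face-to-face group for one student."""
    if study_minutes < MIN_STUDY_MINUTES:
        return "did not study"
    if test_score >= PASS_SCORE:
        return "studied and understands"
    return "studied but does not fully understand"

print(assign_group(45, 85))  # -> studied and understands
```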
Evaluation by Questionnaire on Grouped Flipped Classroom Method
Katsuyuki Umezawa, Takashi Ishida, Makoto Nakazawa, Shigeichi Hirasawa
IEEE 10th International Conference on Engineering Education (ICEED2018) 87 - 92 2018.11 [Refereed]
Flipped classrooms, in which the roles of a classroom lesson and homework are reversed, have recently begun to attract attention. Specifically, the students study on their own using digital teaching materials or e-learning prior to school hours and apply their learning mainly in classroom discussions. We proposed a method for an effective flipped classroom based on the log information of self-study, called a "grouped flipped classroom." We tested the grouped flipped classroom method over 16 weeks of actual lessons in the autumn semester of 2017 at the Shonan Institute of Technology. The results revealed that the grouped flipped classroom improved students' performance. In this paper, we evaluate the questionnaire results collected when applying the grouped flipped classroom to actual lessons.
On the DS2 bound for forney’s generalized decoding using non-binary linear block codes
Toshihiro Niinomi, Hideki Yagi, Shigeichi Hirasawa
IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E101A ( 8 ) 1223 - 1234 2018.08
Recently, Hof et al. extended the type-2 Duman and Salehi (DS2) bound to generalized decoding, which was introduced by Forney, with decision criterion FR. From this bound, they derived two significant bounds. One is the Shulman-Feder bound for generalized decoding (GD) with the binary-input output-symmetric channel. The other is an upper bound for an ensemble of linear block codes, by applying the average complete weight distribution directly to the DS2 bound for GD. For the Shulman-Feder bound for GD, the authors derived a condition under which an upper bound is minimized at an intermediate step and show that this condition yields a new bound which is tighter than Hof et al.’s bound. In this paper, we first extend this result for non-binary linear block codes used over a class of symmetric channels called the regular channel. Next, we derive a new tighter bound for an ensemble of linear block codes, which is based on the average weight distribution.
Application and Evaluation of Grouped Flipped Classroom Method to Real Classes
Katsuyuki Umezawa, Takashi Ishida, Makoto Nakazawa, Shigeichi Hirasawa
the International Conference on Engineering, Technology, and Applied Science (ICETA2018) 99 - 99 2018.06 [Refereed]
Katsuyuki Umezawa, Tomohiko Saito, Takashi Ishida, Makoto Nakazawa, Shigeichi Hirasawa
Trends and Advances in Information Systems and Technologies - Volume 2 [WorldCIST'18, Naples, Italy, March 27-29, 2018] 1307 - 1316 2018.03 [Refereed]
Various factors affect missteps in learning, such as the quality and difficulty of the learning contents and the learner's proficiency level. A system for acquiring the browsing history at the time of learning has been proposed. However, it may be insufficient to refer only to the learner's browsing time. For example, when the browsing time is short, it cannot be determined whether the learning contents were too easy for the learner or whether learning was abandoned because the contents were too difficult. Therefore, in this paper, we propose a method of determining the learning state of learners by analyzing learning history information and brain wave information simultaneously, rather than using each individually. We show that the learning state of each learner can be successfully estimated by our proposed method.
Distance metric learning using each category centroid with nuclear norm regularization
Kenta Mikawa, Manabu Kobayashi, Masayuki Goto, Shigeichi Hirasawa
2017 IEEE Symposium Series on Computational Intelligence, SSCI 2017 - Proceedings 2018- 1 - 5 2018.02 [Refereed]
The development of information technology has resulted in more diverse data characteristics and larger data scales. Therefore, pattern recognition techniques have received significant interest in various fields. In this study, we focus on a pattern recognition technique based on distance metric learning, which learns a metric matrix under an arbitrary constraint from the training data. This method can acquire a distance structure that takes account of the statistical characteristics of the training data. Most distance metric learning methods estimate the metric matrix from pairs of training data. One of the problems of distance metric learning is that the computational complexity of prediction (i.e., distance calculation) is relatively high, especially when the dimension of the input data becomes large. To calculate distances efficiently, we propose a way to derive a low-rank metric matrix with nuclear norm regularization. When solving the optimization problem, we use the alternating direction method of multipliers and the proximal gradient method. To verify the effectiveness of our proposed method in terms of classification accuracy and rank reduction, simulation experiments using benchmark data sets are conducted.
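The rank-reducing effect of nuclear norm regularization can be sketched with its proximal step on a symmetric 2x2 metric matrix (an illustrative fragment, not the paper's full ADMM/proximal-gradient algorithm): soft-thresholding the eigenvalues drives small ones to zero, which lowers the rank of the learned metric.

```python
from math import sqrt, atan2, cos, sin

def prox_nuclear_2x2(m, lam):
    """Proximal operator of lam * (nuclear norm) for a symmetric 2x2 matrix:
    eigendecompose, soft-threshold the eigenvalues, reassemble."""
    a, b, c = m[0][0], m[0][1], m[1][1]
    mean, radius = (a + c) / 2.0, sqrt(((a - c) / 2.0) ** 2 + b ** 2)
    eig1, eig2 = mean + radius, mean - radius      # eigenvalues
    theta = 0.5 * atan2(2.0 * b, a - c)            # angle of eigenvector v1
    s1 = max(eig1 - lam, 0.0)                      # soft-thresholding
    s2 = max(eig2 - lam, 0.0)
    co, si = cos(theta), sin(theta)
    # Reassemble s1*v1*v1^T + s2*v2*v2^T with v1=(co, si), v2=(-si, co).
    return [[s1 * co * co + s2 * si * si, (s1 - s2) * co * si],
            [(s1 - s2) * co * si, s1 * si * si + s2 * co * co]]

# Eigenvalues 3 and 1; thresholding at 1.5 kills the smaller one,
# leaving a rank-1 matrix close to [[0.75, 0.75], [0.75, 0.75]].
shrunk = prox_nuclear_2x2([[2.0, 1.0], [1.0, 2.0]], 1.5)
```

In higher dimensions the same step is an SVD followed by singular-value soft-thresholding, which is where the prediction-time savings of a low-rank metric come from.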
System evaluation of construction methods for multi-class problems using binary classifiers
Shigeichi Hirasawa, Gendo Kumoi, Manabu Kobayashi, Masayuki Goto, Hiroshige Inazumi
Advances in Intelligent Systems and Computing 746 909 - 919 2018 [Refereed]
Construction methods for multi-valued classification (multi-class) systems using binary classifiers are discussed and evaluated by a trade-off model for system evaluation based on rate-distortion theory. Suppose a multi-class system consists of M (≥3) categories and N (≥M-1) binary classifiers; then it can be represented by a matrix W, where W is given by a table of M code words of length N, called a code word table. For a document classification task, the relationship between the probability of classification error Pe and the number of binary classifiers N for given M is investigated, and we show that our constructed systems satisfy desirable properties such as "Flexible" and "Elastic". In particular, modified Reed-Muller codes perform well: they are shown to be "Effective elastic". As a second application we consider a hand-written character recognition task, and we show that the desirable properties are also satisfied.
Decision Feedback Scheme with Criterion LR+Th for the Ensemble of Linear Block Codes.
Toshihiro Niinomi, Hideki Yagi, Shigeichi Hirasawa
International Symposium on Information Theory and Its Applications(ISITA) 261 - 265 2018 [Refereed]
Use of Student Grouping to Make Flipped Classroom More Effective
Katsuyuki Umezawa, Manabu Kobayashi, Takashi Ishida, Makoto Nakazawa, Shigeichi Hirasawa
Proceeding of the 16th Hawaii International Conference on Education 1249 - 1250 2018.01 [Refereed]
The flipped classroom, i.e., the reversal of the roles of the classroom and home study, has been attracting increased attention due to the expected improvement in learning. In the flipped classroom methodology, students study the lesson before coming to class and then obtain more advanced face-to-face learning in class. We have developed and evaluated a method to make classroom flipping more effective. In our proposed method, students are divided into three groups before each class on the basis of their e-learning self-study logs and level of understanding. The three groups are students who studied the lesson and understand the contents, students with poor understanding even though they thoroughly studied the lesson, and students who do not understand because they did not study the lesson. The face-to-face learning in class is done separately for each group. We compared our proposed method with traditional classroom flipping without grouping by testing 70 students in a system engineering class on information theory at the Shonan Institute of Technology [1]. We found that our method better improved the level of understanding of the students with poor understanding and of the students who did not study the lesson. Furthermore, it also improved the level of understanding of the students who studied the lesson and understood the contents. The grouping was done manually and took much time and effort, making it impractical for weekly classes. We therefore developed a tool that automatically groups students so that the method can be practically implemented. Given the logs of self-study time and the achievement test score after self-study, the tool automatically divides the students into three groups and displays them graphically on the screen. It also allows automatically grouped students to be regrouped with the mouse. In the presentation, we will show the outline and effectiveness of our proposed flipped classroom method, and future plans.
A Visualization System of the Contribution of Learners in Software Development PBL Using GitHub.
Yutsuki Miyashita, Atsuo Hazeyama, Hiroaki Hashiura, Masayuki Goto, Shigeichi Hirasawa
25th Asia-Pacific Software Engineering Conference, APSEC 2018, Nara, Japan, December 4-7, 2018 695 - 696 2018 [Refereed]
Collaborative Filtering Based on the Latent Class Model for Attributes
Manabu Kobayashi, Kenta Mikawa, Masayuki Goto, Toshiyasu Matsushima, Shigeichi Hirasawa
( 893 ) 896 - 201712 2017.12 [Refereed]
Collaborative filtering analysis of consumption behavior based on the latent class model
Manabu Kobayashi, Kenta Mikawa, Masayuki Goto, Shigeichi Hirasawa
2017 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2017 2017- 1926 - 1931 2017.11 [Refereed]
In this manuscript, we investigate a collaborative filtering method to characterize the consumption behavior (or evaluations) of customers (or users) and services (or items) for marketing. Assuming that each customer and service has an invisible attribute, called a latent class, we propose a new Bayesian statistical model in which consumption behavior arises probabilistically from the latent class combination of a customer and a service. We then show a method to estimate the parameters of the statistical model based on the variational Bayes method and the mean field approximation. Finally, we show the effectiveness of the proposed model and the estimation method by simulation and by analyzing actual data.
Linear Programming Bounds for 2-Level Unequal Error Protection Codes
Tomohiko Saito, Toshihiro Niinomi, Yoshifumi Ukita, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE Transactions on Fundamentals (Japanese Edition) Vol.J100-A ( 9 ) 316 - 324 2017.09 [Refereed]
An Electroencephalograph-Based Method for Judging the Difficulty of a Task Given to a Learner.
Katsuyuki Umezawa, Tomohiko Saito, Takashi Ishida, Makoto Nakazawa, Shigeichi Hirasawa
17th IEEE International Conference on Advanced Learning Technologies, ICALT 2017, Timisoara, Romania, July 3-7, 2017 384 - 386 2017 [Refereed]
Various factors affect learning, such as the quality and difficulty of the learning contents and the learner’s proficiency, and it may be possible to detect their effects by using the learner’s browsing and edit history as well as biological information such as brain waves and eye movement. We investigated the use of brain waves to estimate the degree of difficulty of a learning task. The first experiment using a simple typing test confirmed a previous finding that the β-wave to α-wave ratio increases with task difficulty. Furthermore, the low-β-wave to low-α-wave ratio, where “low” means low frequency, observed under various learning conditions was found to depend on the frequencies of the waves, and the value of the ratio was shown to represent task difficulty. The second experiment in which the change in brain waves was measured using a simple electroencephalograph (EEG) as the examinees became used to the task showed that the values for the examinees who reported that the task was easy fell gradually, supporting our finding that the value of the ratio represents task difficulty. This information could be used to dynamically adjust task difficulty and thereby optimize the learning effect.
Linear Programming Decoding of Binary Linear Codes for Symbol-Pair Read Channel
Shunsuke Horii, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E99A ( 12 ) 2170 - 2178 2016.12 [Refereed]
In this study, we develop a new algorithm for decoding binary linear codes for symbol-pair read channels. The symbol-pair read channel was recently introduced by Cassuto and Blaum to model channels with higher write resolutions than read resolutions. The proposed decoding algorithm is based on linear programming (LP). For LDPC codes, the proposed algorithm runs in time polynomial in the codeword length. It is proved that the proposed LP decoder has the maximum-likelihood (ML) certificate property, i.e., the output of the decoder is guaranteed to be the ML codeword when it is integral. We also introduce the fractional pair distance d_fp of the code, which is a lower bound on the minimum pair distance. It is proved that the proposed LP decoder corrects up to ⌈d_fp/2⌉ - 1 errors.
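The pair distance underlying the decoder's guarantee can be sketched as follows; the cyclic pair-vector convention of Cassuto and Blaum is assumed (d_fp itself, being an LP relaxation, is not computed here):

```python
# Symbol-pair reading sketch: each read returns a pair of adjacent
# symbols, and the pair distance between two words is the Hamming
# distance between their (cyclic) pair vectors.
def pair_vector(word):
    n = len(word)
    return [(word[i], word[(i + 1) % n]) for i in range(n)]

def pair_distance(u, v):
    return sum(a != b for a, b in zip(pair_vector(u), pair_vector(v)))

def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))

# Two symbol errors spread into four pair errors here, illustrating
# why pair distance exceeds Hamming distance.
u, v = (0, 0, 1, 0, 1), (0, 1, 1, 0, 0)
print(hamming_distance(u, v), pair_distance(u, v))  # -> 2 4
```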
A method based on self-study log information for improving effectiveness of classroom component in flipped classroom approach
Katsuyuki Umezawa, Takashi Ishida, Michitaka Aramoto, Manabu Kobayashi, Makoto Nakazawa, Shigeichi Hirasawa
Blended Learning: Concepts, Methodologies, Tools, and Applications 4 1818 - 1834 2016.08 [Refereed]
The flipped classroom approach has recently begun to attract attention. In a flipped classroom, the conventional roles of classroom and homework are reversed: students study on their own using digital teaching materials or e-learning prior to class and then apply their learning in classroom activities. The authors have developed a method for improving the effectiveness of the classroom component: the students in a class are grouped on the basis of the time they spent studying (as recorded in their self-study logs) and their degree of understanding (as revealed by a self-study achievement test), and a different learning model is used for each group to improve their degree of understanding. Although they were unable to find a meaningful statistical difference in the test scores obtained in an experiment using one class of 34 students, there was a notable difference in the way questions were answered. The results of a free-description questionnaire indicate that the group learning encourages active learning.
Katsuyuki Umezawa, Takashi Ishida, Michitaka Aramoto, Manabu Kobayashi, Makoto Nakazawa, Shigeichi Hirasawa
IJSI 4 ( 2 ) 17 - 32 2016.04 [Refereed]
The flipped classroom approach has recently begun to attract attention. In a flipped classroom, the conventional roles of classroom and homework are reversed: students study on their own using digital teaching materials or e-learning prior to class and then apply their learning in classroom activities. We have developed a method for improving the effectiveness of the classroom component: the students in a class are grouped on the basis of the time they spent studying (as recorded in their self-study logs) and their degree of understanding (as revealed by a self-study achievement test), and a different learning model is used for each group to improve their degree of understanding. Although we were unable to find a meaningful statistical difference in the test scores obtained in an experiment using one class of 34 students, there was a notable difference in the way questions were answered. The results of a free-description questionnaire indicate that the group learning encourages active learning.
Development of Learning History Visualization Systems for the Learning Analytics
Sato Kazuhiro, Aramoto Michitaka, Nakazawa Makoto, Kobayashi Manabu, Nakano Michiko, Goto Masayuki, Hirasawa Shigeichi
Abstracts of Annual Conference of Japan Society for Management Information 2016 349 - 352 2016
View Summary
Two systems have been developed for learning analytics: a generic browsing-history visualization system and an editing-history visualization system for programming courses. With the former, a teacher can analyze students' detailed viewing logs (e.g., the pages referred to and the time taken to solve a given assignment) to understand the learning situation, leading to improvements in the teaching materials and the way of teaching. With the latter, using students' editing logs collected through a newly developed browser-based programming environment, a teacher can analyze students' detailed programming histories (e.g., programming errors and the corrections made to resolve them) to visualize their achievement and support them more appropriately.
A System Evaluation for Construction Methods of Multiclass Problems using Binary Classifiers
Hirasawa Shigeichi, Kumoi Gendo, Kobayashi Manabu, Goto Masayuki, Inazumi Hiroshige
Abstracts of Annual Conference of Japan Society for Management Information 2016 13 - 16 2016
View Summary
Construction methods for multiclass classification systems using binary classifiers are discussed and evaluated with a system evaluation model based on rate-distortion functions. Suppose a multiclass problem consists of M (M ≥ 3) categories and N (N ≥ M−1) discriminators; it can then be solved with a matrix W, where W is given by a table of M code words of length N. Applying benchmark test data (newspaper articles from the 2000 Yomiuri Shimbun), the relationship between the classification error Pe and the number of binary discriminators N for a given M is investigated, and the systems are shown to have desirable properties such as being "flexible" and "elastic".
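The decoding step at the heart of such constructions can be illustrated with a small sketch (illustrative only, not the paper's evaluation model): given a code word table W of M rows and the N binary discriminator outputs, the category is estimated as the row of W nearest in Hamming distance.

```python
import numpy as np

def ecoc_decode(W, outputs):
    """Estimate the category as the row of the code word table W
    nearest in Hamming distance to the binary discriminator outputs."""
    W = np.asarray(W)
    dists = (W != np.asarray(outputs)).sum(axis=1)  # Hamming distance to each row
    return int(np.argmin(dists))

# Toy table: M = 4 code words of length N = 5 with minimum distance 3,
# so any single discriminator error is corrected.
W = [[0, 0, 0, 0, 0],
     [0, 0, 1, 1, 1],
     [1, 1, 0, 0, 1],
     [1, 1, 1, 1, 0]]
# True code word (0,0,1,1,1); the first discriminator answers wrongly.
category = ecoc_decode(W, [1, 0, 1, 1, 1])   # -> 1
```

Larger Hamming distance between rows of W tolerates more erroneous discriminators, which is consistent with the distance of the code word table being the key indicator.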
Collection and analysis of the browsing history, editing history, and biological information when high school students are learning
Umezawa Katsuyuki, Nakazawa Makoto, Ishida Takashi, Saito Tomohiko, Hirasawa Shigeichi
Abstracts of Annual Conference of Japan Society for Management Information 2016 210 - 213 2016
View Summary
There are many causes of failure in study, such as the quality and difficulty of the learning content and the learner's proficiency. We believe such causes can be detected by measuring browsing histories, editing histories, and biological information such as brain waves or eye-tracking data. This would allow automatic presentation of material suited to the learner's level of understanding and evaluation of the suitability of the teaching materials. To analyze the learning process described above, we held a summer science school with dozens of high school students and collected browsing histories, editing histories, and biological information. In this paper, we outline the summer science school and present part of the analysis of the collected data.
Distance metric learning based on different ℓ1 regularized metric matrices in each category.
Kenta Mikawa, Manabu Kobayashi, Masayuki Goto, Shigeichi Hirasawa
285 - 289 2016
A Note on Support Recovery of Sparse Signals using Linear Programming
Shunsuke Horii, Toshiyasu Matsushima, Shigeichi Hirasawa
PROCEEDINGS OF 2016 INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY AND ITS APPLICATIONS (ISITA 2016) 270 - 274 2016 [Refereed]
View Summary
A new theory known as compressed sensing considers the problem of acquiring and recovering a sparse signal from its linear measurements. In this paper, we propose a new support recovery algorithm for noisy measurements based on linear programming (LP). LP is widely used to estimate sparse signals; however, we focus on the problem of recovering the support of sparse signals rather than estimating the signals themselves. First, we derive an integer linear programming (ILP) formulation of the support recovery problem. We then obtain an LP-based support recovery algorithm by relaxing the ILP. The proposed algorithm has an attractive property: its output is guaranteed to be the maximum a posteriori (MAP) estimate whenever it is integer valued. We compare the performance of the proposed algorithm with that of a state-of-the-art algorithm, sparse matching pursuit (SMP), via numerical simulations.
Linear Programming Decoding of Binary Linear Codes for Symbol-Pair Read Channels
Shunsuke Horii, Toshiyasu Matsushima, Shigeichi Hirasawa
2016 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY 1944 - 1948 2016 [Refereed]
View Summary
In this paper, we develop a new decoding algorithm for binary linear codes over the symbol-pair read channel. The symbol-pair read channel was recently introduced by Cassuto and Blaum to model channels whose write resolution is higher than their read resolution. The proposed decoding algorithm is based on linear programming (LP). It is proved that the proposed LP decoder has the maximum-likelihood (ML) certificate property, i.e., the output of the decoder is guaranteed to be the ML codeword when it is integral. We also introduce the fractional pair distance d(fp) of a code, which is a lower bound on the minimum pair distance, and prove that the proposed LP decoder corrects up to ⌈d(fp)/2⌉ − 1 errors.
A Bayes Prediction Algorithm for Model Class Composed of Several Subclasses
Masayuki Goto, Manabu Kobayashi, Kenta Mikawa, Shigeichi Hirasawa
PROCEEDINGS OF 2016 INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY AND ITS APPLICATIONS (ISITA 2016) 121 - 125 2016 [Refereed]
View Summary
The Bayes coding algorithm for the tree model class is an effective method for calculating the prediction probability of the next symbol from past data under the Bayes criterion. The Bayes optimal prediction is given by the mixture of all models in a given model class, and the Bayes coding algorithm gives an efficient way to calculate the coding probability. This algorithm is applicable to general prediction problems with time-series data. Although the Bayes coding algorithm assumes a class of Markov sources, other model classes can be useful for real prediction problems in practice. For example, the data at the next time point may not always depend on the exact sequence of past data. It is possible to construct an efficient Bayes prediction algorithm for a model class in which the probability of the next symbol is conditioned on cumulative counts in the past data sequence. However, there is usually no way to know in advance which model class is best for the observed data sequence. This paper considers a method to mix the prediction probabilities given by the mixtures on different model subclasses. If each mixture on a subclass can be calculated efficiently, the proposed method is also efficient. Based on an asymptotic analysis, we evaluate the prediction performance of the proposed method.
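The subclass-mixing idea can be sketched as follows (a minimal illustration with two assumed toy predictors, not the paper's algorithm): each subclass predictor is weighted by the marginal likelihood it assigned to the past sequence, and the weighted one-step-ahead predictions are mixed.

```python
import math

def mix_predict(predictors, history, symbol, prior=None):
    """Bayes mixture over subclasses: weight each predictor by the
    marginal likelihood it gave to the history, then mix its
    one-step-ahead probability for the next symbol."""
    n = len(predictors)
    prior = prior or [1.0 / n] * n
    weights = []
    for w0, pred in zip(prior, predictors):
        ll = sum(math.log(pred(history[:t], history[t]))
                 for t in range(len(history)))
        weights.append(w0 * math.exp(ll))
    total = sum(weights)
    return sum(w / total * pred(history, symbol)
               for w, pred in zip(weights, predictors))

# Two toy subclass predictors over the binary alphabet.
def uniform(past, x):            # memoryless fair-coin model
    return 0.5

def laplace(past, x):            # frequency model with Laplace smoothing
    return (list(past).count(x) + 1) / (len(past) + 2)

p = mix_predict([uniform, laplace], [1, 1, 1, 1], 1)
```

After observing four 1s, the frequency model has higher marginal likelihood and dominates the mixture, so p is well above 0.5.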
A new latent class model for analysis of purchasing and browsing histories on EC sites
Masayuki Goto, Kenta Mikawa, Shigeichi Hirasawa, Manabu Kobayashi, Tota Suko, Shunsuke Horii
Industrial Engineering and Management Systems 14 ( 4 ) 335 - 346 2015.12 [Refereed]
View Summary
An electronic commerce site (EC site) has become an important marketing channel where consumers can purchase many kinds of products; their access logs, including purchase records and browsing histories, are saved in the EC site's databases. These log data can be utilized for web marketing. Customers who purchase many items are good customers, whereas customers who do not purchase many items cannot be considered good customers even if they browse many items. If the attributes of good customers and those of other customers are clarified, such information is valuable input for a new marketing strategy. Regarding product items, the characteristics of good items bought by many users are likewise valuable, and a method to efficiently analyze such characteristics is needed. This paper proposes a new latent class model that analyzes both purchasing and browsing histories to form latent item and user clusters. An example of data analysis on an EC site is demonstrated by applying the proposal. Through the clusters obtained by the proposed latent class model and the classification rules given by a decision tree model, new findings are extracted from the purchasing and browsing histories.
On the Blended Learning of Statistical Education with Digital Textbooks and Worksheets
Daiki Koizumi, Tota Suko, Shigeichi Hirasawa
Proceeding of the 77th National Convention of IPSJ 4-605 - 4-606 2015.03
An Investigation into Reading Process in English by Making use of Detailed Learning Logs - (3) Learner Model in the Reading Process
Makoto Nakazawa, Katsuyuki Umezawa, Manabu Kobayashi, Daiki Koizumi, Masayuki Goto, Shigeichi Hirasawa
Proceeding of the 77th National Convention of IPSJ 4-505 - 4-506 2015.03
A Study of Distance Metric Learning by Considering the Distances between Category Centroids
Kenta Mikawa, Manabu Kobayashi, Masayuki Goto, Shigeichi Hirasawa
2015 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC 2015): BIG DATA ANALYTICS FOR HUMAN-CENTRIC SYSTEMS 1645 - 1650 2015 [Refereed]
View Summary
In this paper, we focus on pattern recognition based on the vector space model. Distance metric learning is a method for learning a metric matrix under given constraints, generally using an iterative optimization procedure to obtain a suitable distance structure that reflects the statistical characteristics of the training data. Most distance metric learning methods estimate the metric matrix from all pairs of training data; however, the computational cost becomes considerable as the number of training data increases. To avoid this problem, we propose a way of learning the distance metric using each category centroid. To verify the effectiveness of the proposed method, we conduct simulation experiments using benchmark data.
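The centroid-based idea can be sketched as follows (an illustrative sketch with the identity matrix standing in for a learned metric; the paper's optimization of the metric matrix itself is not reproduced):

```python
import numpy as np

def category_centroids(X, y):
    """Mean vector (centroid) of each category."""
    labels = np.unique(y)
    return labels, np.array([X[y == c].mean(axis=0) for c in labels])

def nearest_centroid(x, labels, C, M):
    """Assign x to the category whose centroid c minimizes the
    metric-matrix distance (x - c)^T M (x - c)."""
    d = [(x - c) @ M @ (x - c) for c in C]
    return labels[int(np.argmin(d))]

# Toy data: two categories in 2-D.
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
labels, C = category_centroids(X, y)
pred = nearest_centroid(np.array([0.1, 0.0]), labels, C, np.eye(2))  # -> 0
```

Because only one centroid per category enters the distance computation, the cost per classification grows with the number of categories rather than the number of training pairs.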
An Effective Flipped Classroom based on Log Information of Self-study
Umezawa Katsuyuki, Aramoto Michitaka, Kobayashi Manabu, Ishida Takashi, Nakazawa Makoto, Hirasawa Shigeichi
3RD INTERNATIONAL CONFERENCE ON APPLIED COMPUTING AND INFORMATION TECHNOLOGY (ACIT 2015) 2ND INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND INTELLIGENCE (CSI 2015) 248 - 253 2015 [Refereed]
View Summary
Flipped classrooms have recently begun to attract attention. In a flipped classroom, the roles of classroom and homework are reversed: students study on their own using digital teaching materials or e-learning prior to class and apply their learning mainly in classroom discussions. In this paper, we propose a method for an effective flipped classroom based on log information from self-study. Specifically, when students study by e-learning at home, we collect learning logs. We classify students into groups on the basis of their study time and degree of understanding by analyzing these logs, and we can improve the students' degree of understanding by conducting discussions and/or having each group give presentations in the classroom.
Tota Suko, Shunsuke Horii, Manabu Kobayashi, Masayuki Goto, Toshiyasu Matsushima, Shigeichi Hirasawa
Journal of Japan Industrial Management Association 65 ( 2 ) 78 - 88 2014.07 [Refereed]
View Summary
In this paper, we study privacy-preserving linear regression analysis. We propose a new protocol, a distributed calculation method that computes the least squares estimator in the case where two parties hold different types of explanatory variables. We show that the proposed protocol preserves privacy. Because the protocol involves iterative calculations, we evaluate the number of iterations via numerical experiments. Finally, we show an extended protocol that is a distributed calculation method for k parties.
Experimental Development and Evaluation of Statistical Education Digital Textbook for Tablet Devices
Daiki Koizumi, Tota Suko, Shigeichi Hirasawa
Proceeding of the 76th National Convention of IPSJ 4-361 - 4-362 2014.03
The e-learning materials production supporting system based on the existing PDF file
Michitaka Aramoto, Daiki Koizumi, Tota Suko, Shigeichi Hirasawa
Proceeding of the 76th National Convention of IPSJ 4-359 - 4-360 2014.03
An Analysis of Learner Behavior Using Advanced Learning History
Makoto Nakazawa, Daiki Koizumi, Masayuki Goto, Shigeichi Hirasawa
Proceeding of the 76th National Convention of IPSJ 4-357 - 4-358 2014.03
A Proposal of l(1) Regularized Distance Metric Learning for High Dimensional Sparse Vector Space
Kenta Mikawa, Manabu Kobayashi, Masayuki Goto, Shigeichi Hirasawa
2014 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC) 1985 - 1990 2014 [Refereed]
View Summary
In this paper, we focus on pattern recognition based on the vector space model with high-dimensional and sparse data. One pattern recognition method is metric learning, which learns a metric matrix using an iterative optimization procedure. However, most metric learning methods tend to overfit and require increasing computational time in high-dimensional, sparse settings. To avoid these problems, we propose an l(1)-regularized metric learning method using the alternating direction method of multipliers (ADMM) in the supervised setting. The effectiveness of the proposed method is clarified by classification experiments using Japanese newspaper articles and the UCI Machine Learning Repository, and we show that the proposed method is a special case of statistical sparse covariance selection.
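Inside an ADMM iteration, the l(1) term is typically handled by an element-wise soft-thresholding update (the proximal operator of the l1 norm). A minimal sketch of that single step, applied to a hypothetical metric matrix A:

```python
import numpy as np

def soft_threshold(A, tau):
    """Element-wise soft-thresholding: the proximal operator of the
    l1 norm, the key update in ADMM for l1-regularized problems.
    Entries with magnitude below tau are set exactly to zero."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

# Hypothetical metric-matrix iterate; small entries are zeroed out,
# which is what produces a sparse metric in the l1-regularized setting.
A = np.array([[1.5, -0.2], [0.05, -1.0]])
S = soft_threshold(A, 0.1)
```

This step is what makes the learned metric sparse, which in turn limits overfitting in high-dimensional settings.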
A Modified Aspect Model for Simulation Analysis
Masayuki Goto, Kazushi Minetoma, Kenta Mikawa, Manabu Kobayashi, Shigeichi Hirasawa
2014 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC) 1306 - 1311 2014 [Refereed]
View Summary
This paper proposes a new latent class model to represent user segments in a marketing model of electronic commerce sites. The aspect model proposed by T. Hofmann, also called the probabilistic latent semantic indexing (PLSI) model, is well known. Although the aspect model is an effective model for information retrieval, it is difficult to interpret the probability of a latent class in terms of marketing models. For marketing research, it is desirable that the probability of a latent class represent the size of a customer segment. With such a formulation, simulation analysis of several scenarios becomes possible using the estimated model: the impact of a strategy that contacts a specific customer segment and works to increase the number of customers belonging to that segment can be predicted with a model that represents segment sizes. This paper proposes a new model in which the probability parameter of the latent variable represents the proportion of users in the market with the same preferences. The effectiveness of our proposal is verified by applying the proposed model to data from an internet portal site for job hunting.
Robustness of Syndrome Analysis Method in Highly Structured Fault-Diagnosis Systems
Manabu Kobayashi, Masayuki Goto, Toshiyasu Matsushima, Shigeichi Hirasawa
2014 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC) 2757 - 2763 2014 [Refereed]
View Summary
F. P. Preparata et al. proposed a fault diagnosis model (the PMC model) for finding all faulty units in a multicomputer system using the outcomes of tests in which each unit tests some other units. T. Kohda proposed a highly structured (HS) system and the syndrome analysis method (SAM) for diagnosis from local testing results. In this paper, we introduce a maximum a posteriori probability algorithm (MAPDA) for the HS system under the probabilistic fault model. Analyzing the MAPDA, we show that the SAM approaches the MAPDA as the fault probability becomes smaller. Finally, we show the robustness of the SAM in the HS system.
A Note on the Correlated Multiple Matrix Completion based on the Convex Optimization Method
Shunsuke Horii, Toshiyasu Matsushima, Shigeichi Hirasawa
2014 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC) 1618 - 1623 2014 [Refereed]
View Summary
In this paper, we consider a completion problem for multiple related matrices. The matrix completion problem is to estimate the unobserved elements of a matrix from its observed elements; it has many applications, such as collaborative filtering, computer vision, and biology. When several related matrices are available, their simultaneous completion can be expected to perform better than completing each matrix independently. Collective matrix factorization is a powerful approach to jointly factorizing multiple matrices, but existing completion algorithms for it have some drawbacks. One is that most existing algorithms are based on non-convex formulations of the problem. Another is that only a few existing algorithms consider the strength of the relations among matrices, which results in worse performance when some matrices are actually unrelated. In this paper, we formulate the multiple matrix completion problem as a convex optimization problem that takes the strength of the relations among matrices into account. We also develop an optimization algorithm that solves the proposed problem efficiently based on the alternating direction method of multipliers (ADMM). We verify the effectiveness of our approach through numerical experiments on both synthetic data and a real data set, MovieLens.
Learning Styles for e-learning Systems over Virtual Desktop Infrastructure
Shigeichi Hirasawa, Makoto Nakazawa, Daiki Koizumi, Tomoko Kondo
2014 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC) 3241 - 3246 2014 [Refereed]
View Summary
By introducing virtual desktop infrastructure (VDI) technology into e-learning systems, an identical learning environment can be provided to the learner at any time and place; even if the learner stops learning or working, he or she can resume from a different terminal, including one with a different operating system. On the other hand, the quality of the screen images on the desktop is affected by the quality of service (QoS) of the network, since the screen images are transferred from the server to the client by a transfer protocol. In this paper, we discuss the influence of QoS on learning styles through its effect on usability over VDI. Constructing an experimental network using network emulators, we evaluate and discuss the influence of QoS factors such as round-trip delay time (DT) and bandwidth (BW) on learning style as measured by quality of experience (QoE). As a result, we clarify the relationships between the network QoS and the QoE of the e-learning styles.
The Empirical Bayes Forecasting of Web Traffic on the Non-Stationary Poisson Distribution
Daiki Koizumi, Toshiyasu Matsushima, Shigeichi Hirasawa
Proceeding of the 36th Symposium on Information Theory and its Applications (SITA2013) 432 - 436 2013.11
Learning Styles in e-Learning: Collaborative Learning and Learning Log Analysis
Makoto Nakazawa, Daiki Koizumi, Takashi Ishida, Masayuki Goto, Shigeichi Hirasawa
Proceedings of the 2013 Autumn Conference of the Japan Industrial Management Association 66 - 67 2013.11
Learning Styles in e-Learning: Networked Collaborative Learning and On-Demand Lectures
Shigeichi Hirasawa, Makoto Nakazawa, Daiki Koizumi, Takashi Ishida, Masayuki Goto
Proceedings of the 2013 Autumn Conference of the Japan Industrial Management Association 64 - 65 2013.11
A Study on Learning Style in e-learning
Shigeichi Hirasawa, Masayuki Goto, Makoto Nakazawa, Takashi Ishida, Daiki Koizumi
National Conference of the Japan Society for Management Information (JASMIN) 2013 Autumn 41 - 44 2013.10
Study on Appropriate Parameter Setting of Multimedia for e-Learning based on QoS
Makoto Nakazawa, Daiki Koizumi, Shigeichi Hirasawa
Proceeding of the 75th National Convention of IPSJ 4-477 - 4-478 2013.03
Experimental Development of Digital Textbook for University Education: Introduction to Statistics for Tablet Devices
Daiki Koizumi, Tota Suko, Shigeichi Hirasawa
Proceeding of the 75th National Convention of IPSJ 4-467 - 4-468 2013.03
The Influence of Network QoS on Cross-Cultural Distance Learning (CCDL) of Waseda University
Makoto Nakazawa, Daiki Koizumi, Yusuke Kondo, Michiko Nakano, Shigeichi Hirasawa
Proceeding of the 75th National Convention of IPSJ 4-401 - 4-402 2013.03
On the Subjective Evaluation of Cross-Cultural Distance Learning (CCDL) in terms of Sound Quality, Latency, and Picture
Daiki Koizumi, Makoto Nakazawa, Yusuke Kondo, Michiko Nakano, Shigeichi Hirasawa
Proceeding of the 75th National Convention of IPSJ 4-399 - 4-400 2013.03
On the Learning Effect of Cross-Cultural Distance Learning (CCDL): Do the classes raise the students' social skills?
Michiko Nakano, Daiki Koizumi, Shigeichi Hirasawa, Yusuke Kondo
Proceeding of the 75th National Convention of IPSJ 4-397 - 4-398 2013.03
On the Learning Effect of Cross-Cultural Distance Learning (CCDL): Do the classes raise the students' motivation?
Michiko Nakano, Daiki Koizumi, Shigeichi Hirasawa, Yusuke Kondo
Proceeding of the 75th National Convention of IPSJ 4-395 - 4-396 2013.03
Iterative multiuser joint decoding based on ADMM
S. Horii, T. Suko, T. Matsushima, S. Hirasawa
2013 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2013 - Proceedings 1097 - 1100 2013
View Summary
In this paper, we develop an iterative multiuser joint decoder of code-division multiple-access (CDMA) signals based on a distributed optimization algorithm. For the joint decoding problem, decoding algorithms based on the turbo principle are widely used; such an algorithm consists of a soft-input soft-output (SISO) channel decoder and a SISO multiuser detector and can be derived as an application of the sum-product algorithm. On the other hand, in the area of error-correcting codes, decoding algorithms based on convex optimization have been attracting a great deal of attention: decoding based on linear programming (LP) achieves a decoding error rate comparable to the sum-product algorithm with stronger theoretical guarantees. We formulate the joint decoding problem of CDMA signals as a convex optimization problem and present a relaxed form of the problem. Moreover, we propose a distributed algorithm, based on the alternating direction method of multipliers (ADMM), which efficiently solves the relaxed optimization problem. We also examine the performance of the proposed decoder through numerical simulations. © 2013 IEEE.
The Influence of Network Quality of Service (QoS) on Cross-Cultural Distance Learning (CCDL) Classes at Waseda University and Its Learning Effects
Michiko Nakano, Makoto Nakazawa, Daiki Koizumi, Yusuke Kondo, Shigeichi Hirasawa
Information Communication Technology Practice & Research 2012 67 - 8 2013 [Refereed]
Design and development of prototype interactive educational materials for a class in university
Ishida Takashi, Kobayashi Manabu, Umezawa Katsuyuki, Hirasawa Shigeichi
Abstracts of Annual Conference of Japan Society for Management Information 2013 298 - 301 2013
View Summary
In this study, we discuss the design and development of prototype interactive educational materials for a university class in mathematical information science. These materials are mainly intended for self-study. In addition, we assume that (1) the materials can be used on several types of devices, (2) complex mathematical formulas can be handled, and (3) the development tools are as standard as possible. We have developed the prototype materials and used them in the class, and we discuss their effectiveness and the details of their development.
The Influence of QoS on e-Learning Environment under Virtual Desktop Infrastructure
Makoto Nakazawa, Daiki Koizumi, Shigeichi Hirasawa
Proc. in the 5th International Conference on Communications, Computers and Applications (MIC-CCA2012) 174 - 178 2012.10 [Refereed]
A Learning Algorithm of Threshold Value on the Automatic Detection of SQL Injection Attack
Daiki Koizumi, Takeshi Matsuda, Michio Sonoda, Shigeichi Hirasawa
Proceeding of The 2012 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'12) 2 933 - 937 2012.07 [Refereed]
Markov Chain Monte Carlo Method Simulation of SQL Injection Attack Detection
Michio Sonoda, Takeshi Matsuda, Daiki Koizumi, Shigeichi Hirasawa
Proceeding of the 11th World Meeting for the International Society for Bayesian Analysis (ISBA) 2012 155 - 155 2012.06 [Refereed]
Predictive Distribution of SQL Injection Attacks Detection Model
Takeshi Matsuda, Michio Sonoda, Daiki Koizumi, Shigeichi Hirasawa
Proceeding of the 11th World Meeting for the International Society for Bayesian Analysis (ISBA) 2012 133 - 133 2012.06 [Refereed]
Experiments with English Vocabulary and Reading Practice Materials Using Mobile Technology and e-Learning Experiments Using a Virtual Desktop
Michiko Nakano, Yusuke Kondo, Shigeichi Hirasawa, Daiki Koizumi, Tomohiko Saito, 永間広宣, 黒田学, 神馬豊彦
Japan Association of College English Teachers (JACET), Special Committee on ICT Research, 2011 ICT Class Practice Report 99 - 113 2012.03
e-Learning using the virtual desktop - The Influence of QoS and Terminals
Makoto Nakazawa, Daiki Koizumi, Katsuyuki Umezawa, Shigeichi Hirasawa
Proceeding of the 74th National Convention of IPSJ 4-481 - 4-482 2012.03
On e-Learning under the Virtual Desktop Infrastructure: long distance network access from abroad
Daiki Koizumi, Takeshi Matsuda, Makoto Nakazawa, Shigeichi Hirasawa
Proceeding of the 74th National Convention of IPSJ 4-479 - 4-480 2012.03
e-Learning on Virtual Desktop - A case study of English Language Learning
Yusuke Kondo, Michiko Nakano, Shigeichi Hirasawa, Daiki Koizumi, Tomohiko Saito
Proceeding of the 74th National Convention of IPSJ 4-475 - 4-476 2012.03
e-Learning using the virtual desktop - authentication method
Katsuyuki Umezawa, Daiki Koizumi, Makoto Nakazawa, Shigeichi Hirasawa
Proceeding of the 74th National Convention of IPSJ 4-451 - 4-452 2012.03
On the Threshold Learning Algorithm of SQL Injection Attack Detection by the Feature of Single Character
Daiki Koizumi, Takeshi Matsuda, Michio Sonoda, Shigeichi Hirasawa
Proceeding of the 74th National Convention of IPSJ 3-559 - 3-560 2012.03
Approach to the Feature Extraction of Attack Character String and the Automatic Detection of a Web Application Attack
Michio Sonoda, Takeshi Matsuda, Daiki Koizumi, Shigeichi Hirasawa, Shigeo Tsujii
Proceeding of the 74th National Convention of IPSJ 3-557 - 3-558 2012.03
Fault Diagnosis Algorithm in Multi-Computer Systems based on Lagrangian Relaxation Method
Shunsuke Horii, Manabu Kobayashi, Toshiyasu Matsushima, Shigeichi Hirasawa
2012 INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY AND ITS APPLICATIONS (ISITA 2012) 712 - 716 2012 [Refereed]
View Summary
We propose new algorithms for the fault diagnosis problem based on the dual decomposition method and the augmented Lagrangian method. Our algorithms are convergent, and their outputs are the same as that of the linear programming (LP) based fault diagnosis algorithm. The proposed algorithms have smaller computational complexity than an ordinary LP solver. Experimental results show the practical potential of the proposed algorithms.
An Error Probability Estimation of the Document Classification Using Markov Model
Manabu Kobayashi, Hiroshi Ninomiya, Toshiyasu Matsushima, Shigeichi Hirasawa
2012 INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY AND ITS APPLICATIONS (ISITA 2012) 717 - 721 2012 [Refereed]
View Summary
The document classification problem has been investigated using various techniques, such as the vector space model, support vector machines, random forests, and so on. J. Ziv et al. proposed a document classification method using the Ziv-Lempel data compression algorithm. Furthermore, the context-tree weighting (CTW) algorithm has been proposed as an outstanding data compression method, and experimental results for document classification using the CTW algorithm have been reported. In this paper, we assume for document classification that each document in a given category arises from a Markov model with the same parameters. We then propose an analysis method to estimate the classification error probability for documents of finite length.
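The modeling assumption can be illustrated with a small likelihood-based classifier (a simplified first-order sketch over characters with Laplace smoothing; the paper's error-probability analysis is not reproduced): each category is a Markov chain, and a document is assigned to the category under which it is most likely.

```python
import math
from collections import defaultdict

def train_markov(docs, alpha=1.0):
    """Laplace-smoothed first-order Markov model over characters."""
    counts = defaultdict(lambda: defaultdict(float))
    alphabet = set()
    for doc in docs:
        alphabet.update(doc)
        for a, b in zip(doc, doc[1:]):
            counts[a][b] += 1.0
    return counts, alphabet, alpha

def log_likelihood(doc, model):
    counts, alphabet, alpha = model
    V = len(alphabet)
    ll = 0.0
    for a, b in zip(doc, doc[1:]):
        row = counts[a]
        total = sum(row.values())
        ll += math.log((row[b] + alpha) / (total + alpha * V))
    return ll

def classify_doc(doc, models):
    """Assign the document to the category whose Markov model
    gives it the highest likelihood."""
    return max(models, key=lambda c: log_likelihood(doc, models[c]))

# Toy categories: one favors alternation, the other repetition.
models = {'A': train_markov(['ababab']), 'B': train_markov(['aabbaabb'])}
pred = classify_doc('abab', models)   # -> 'A'
```

For short documents the two likelihoods are close, which is exactly the finite-length regime whose error probability the paper analyzes.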
On the Influence of Communication Quality in e-Learning Systems Using Virtual Desktops
Makoto Nakazawa, Daiki Koizumi, Tomoko Kondo, Katsuyuki Umezawa, Shigeichi Hirasawa
Proceedings of the 2011 Annual Conference of the Japan e-Learning Association 92 - 98 2011.12
e-Learning Using Virtual Desktops in a Cloud Environment
Daiki Koizumi, Tomohiko Saito, Makoto Nakazawa, Michiko Nakano, Shigeichi Hirasawa
Proceedings of the 2011 Annual Conference of the Japan e-Learning Association 86 - 91 2011.12
On Automatic Detection of SQL Injection Attacks by the Feature Extraction of the Single Character
Michio Sonoda, Takeshi Matsuda, Daiki Koizumi, Shigeichi Hirasawa
Proceedings of the 4th international conference on Security of information and networks (SIN2011) 81 - 86 2011.11 [Refereed]
An e-Learning System Using Virtual Desktops in a Cloud Environment
Daiki Koizumi, Tomoko Kondo, Makoto Nakazawa, Katsuyuki Umezawa, Shigeichi Hirasawa
Proceedings of the 2011 Conference on ICT Strategies for Educational Reform, Japan Universities Association for Computer Education 194 - 195 2011.09
A Note on the Linear Programming Decoding of Binary Linear Codes for Multiple-Access Channel
Shunsuke Horii, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E94A ( 6 ) 1230 - 1237 2011.06 [Refereed]
View Summary
In this paper, we develop linear-programming (LP) decoding for multiple-access channels with binary linear codes. For single-user channels, LP decoding has attracted much attention in recent years as a good approximation to maximum-likelihood (ML) decoding. We demonstrate how the ML decoding problem for multiple-access channels with binary linear codes can be formulated as an LP problem. This is not straightforward, because the objective function of the problem is generally a non-linear function of the codeword symbols. We introduce auxiliary variables such that the objective function is a linear function of these variables. The ML decoding problem then reduces to the LP problem. As in the case for single-user channels, we formulate the relaxed LP problem to reduce the complexity for practical implementation, and as a result propose a decoder that has the desirable property known as the ML certificate property (i.e., if the decoder outputs an integer solution, the solution is guaranteed to be the ML codeword). Although the computational complexity of the proposed algorithm is exponential in the number of users, we can reduce this complexity for Gaussian multiple-access channels. Furthermore, we compare the performance of the proposed decoder with a decoder based on the sum-product algorithm.
Michio Sonoda, Takeshi Matsuda, Daiki Koizumi, Shigeichi Hirasawa
IPSJ SIG Technical Notes 49 1 - 7 2011.03
On the capacity of fingerprinting codes against unknown size of colluders
Gou Hosoya, Hideki Yagi, Manabu Kobayashi, Shigeichi Hirasawa
Proceedings of the 2011 7th International Conference on Information Assurance and Security, IAS 2011 234 - 239 2011 [Refereed]
View Summary
In this paper, a new attack model in which the number of colluders is distributed according to a certain probability distribution is introduced. Two classes of collusion attacks, which include well-known collusion attacks in the context of multimedia fingerprinting, are provided. For these two attack classes, achievable rates for an unknown number of actual colluders are derived, and based on them, achievable rates for some particular attacks are investigated. For the AND attack, the bound derived in this paper coincides with the previously known bound, although the attack model in this paper does not assume that the decoder knows the actual number of colluders. Moreover, for the averaging attack, it is clarified that the derived achievable rate is larger than the previously known bound for random linear codes. © 2011 IEEE.
Disk Allocation Methods for Cartesian Product Files Using Unequal Error Protection Codes
Tomohiko Saito, Hiroshige Inazumi, Toshiyasu Matsushima, Shigeichi Hirasawa
2011 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC) 2437 - 2442 2011 [Refereed]
View Summary
Allocation methods for Cartesian product files on multiple disks using linear error-correcting codes are discussed. In this paper, we propose an allocation method using unequal error protection (UEP) codes. Codewords of a UEP code have special bits that are protected against a greater number of errors than the other bits. We first assume a model in which "*" (meaning "don't care") appears with a different probability in each attribute of a query. In this case, the average access time can be calculated using the split distance distribution. Finally, we illustrate the average access time of the methods using UEP codes.
System Evaluation of Disk Allocation Methods for Cartesian Product Files by using Error Correcting Codes
Shigeichi Hirasawa, Tomohiko Saito, Hiroshige Inazumi, Toshiyasu Matsushima
2011 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC) 2443 - 2448 2011 [Refereed]
View Summary
We discuss disk allocation methods for Cartesian product files by introducing error correcting codes, and clarify the performance of the methods by system evaluation models developed using rate distortion theory. Assume q(n) Cartesian product files with n attributes and q actual values in each attribute, and store the q(n) files on G(<= q(n)) disks. For a partial match access request, we present new disk allocation methods which are able to access the disks in parallel as much as possible, where the partial match access request may include an indefinite case (don't care: "*") in some attributes, and the * requires accessing the files corresponding to all actual values of that attribute. In this paper, we propose to apply unequal error protection codes to the case where the probabilities of occurrence of the * in the attributes of a partial match access request are not the same. We show that the disk allocation methods have desirable properties as n becomes large.
Probabilistic Fault Diagnosis and its Analysis in Multicomputer Systems
Manabu Kobayashi, Toshinori Takabatake, Toshiyasu Matsushima, Shigeichi Hirasawa
2011 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC) 1205 - 1211 2011 [Refereed]
View Summary
F. P. Preparata et al. have proposed a fault diagnosis model to find all faulty units in a multicomputer system by using the outcomes of tests in which each unit tests some other units. In this paper, for probabilistic diagnosis models, we show an efficient diagnosis algorithm to obtain the posterior probability that each unit is faulty given the test outcomes. Furthermore, we propose a method to analyze the diagnostic error probability of this algorithm.
An Authentication System using Smart Phones as Secure Storage
Katsuyuki Umezawa, Satoru Tezuka, Shigeichi Hirasawa
2011 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC) 415 - 420 2011 [Refereed]
View Summary
The smart phone penetration rate has increased recently. There are also many commercial terminals for receiving network services. When a service is received on a smart phone or PC, user authentication is very important for ensuring secure and safe use. Currently, authentication is required each time a user changes the terminal on which a service is received. We propose a system that uses a smart phone as a storage device for authentication information, such as ID, password, and cookie information. With this system, the smart phone and various terminals cooperate through short-distance wireless telecommunication technologies such as Bluetooth. We evaluate the performance of our proposed system. As a result, the user can input authentication information to and receive a service on a terminal by simply swiping the smart phone over the terminal.
On Predictive Errors of SQL Injection Attack Detection by the Feature of the Single Character
Takeshi Matsuda, Daiki Koizumi, Michio Sonoda, Shigeichi Hirasawa
2011 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC) 1722 - 1727 2011 [Refereed]
View Summary
The sigmoid function has been widely used in binary discrimination problems. In this paper, we construct a function which is similar to a sigmoid function, and show that our proposed model is more effective for SQL injection attack detection than the method using a sigmoid function.
A Note on the Branch-and-Cut Approach to Decoding Linear Block Codes
Shunsuke Horii, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E93A ( 11 ) 1912 - 1917 2010.11 [Refereed]
View Summary
Maximum likelihood (ML) decoding of linear block codes can be formulated as an integer linear programming (ILP) problem. Since it is NP-hard in general, there has been much research on algorithms to solve the problem approximately. One of the most popular algorithms is linear programming (LP) decoding, proposed by Feldman et al. LP decoding is based on the LP relaxation, which is a method to approximately solve the ILP corresponding to the ML decoding problem. Advanced algorithms for solving ILP (approximately or exactly) include the cutting-plane method and the branch-and-bound method. As applications of these methods, adaptive LP decoding and branch-and-bound decoding have been proposed by Taghavi et al. and Yang et al., respectively. Another method for solving ILP is the branch-and-cut method, which is a hybrid of the cutting-plane and branch-and-bound methods. The branch-and-cut method is widely used to solve ILP; however, it is not obvious that the method works well for the ML decoding problem. In this paper, we show that the branch-and-cut method is certainly effective for the ML decoding problem. Furthermore, the branch-and-cut method consists of several technical components, and the performance of the algorithm depends on the selection of these components, so it is important to consider how to select them. Through numerical simulations, we examine the differences caused by the selection of these technical components and consider which scheme is most effective for the ML decoding problem.
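On a binary symmetric channel, the ML decoding problem described above amounts to minimizing the Hamming distance to the received word subject to the binary parity-check constraints, i.e. an integer program. A minimal sketch of that objective (using the [7,4] Hamming code as an illustrative choice, with exhaustive search standing in for the branch-and-cut solver):

```python
from itertools import product

# Parity-check matrix of the [7,4] Hamming code (an illustrative choice,
# not taken from the paper).
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def is_codeword(c):
    """The ILP constraint: H c = 0 (mod 2)."""
    return all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H)

def ml_decode(y):
    """On a BSC, ML decoding minimizes the Hamming distance to y subject
    to H c = 0 -- the integer program that LP relaxation, branch-and-bound,
    and branch-and-cut solve far more cleverly than this brute force."""
    codewords = [c for c in product([0, 1], repeat=7) if is_codeword(c)]
    return min(codewords, key=lambda c: sum(a != b for a, b in zip(c, y)))

y = [1, 0, 1, 1, 0, 1, 1]        # received word with a single flipped bit
print(ml_decode(y))              # → (1, 0, 1, 1, 0, 1, 0)
```

Exhaustive search is exponential in the code dimension, which is exactly why the relaxation-based solvers discussed in the paper are needed for codes of practical length.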
An accurate density evolution analysis for a finite-state Markov channel
Naoto Kobayashi, Toshiyasu Matsushima, Shigeichi Hirasawa
Journal of Discrete Mathematical Sciences and Cryptography 13 ( 1 ) 85 - 97 2010 [Refereed]
View Summary
In this paper, we consider density evolution analyses of low-density parity-check (LDPC) codes for a finite-state Markov channel (FSMC). Since operations in the subgraph corresponding to the estimation process do not satisfy symmetry conditions, all densities involved in each value of the channel state should be tracked. In [6], to avoid the complexity of computing plural pdfs, only one pdf involved in the channel state is computed by exploiting marginalization operations. We suppose that this approach is not accurate enough to track the estimation process of joint estimation-decoding. In this paper, we present a procedure for the density evolution analysis of LDPC codes for an FSMC in which all densities involved in each value of the channel state are tracked. © 2010 Taylor & Francis Group, LLC.
A note on automatic construction algorithms for orthogonal designs of experiments using error-correcting codes
Tomohiko Saito, Toshiyasu Matsushima, Shigeichi Hirasawa
Journal of Discrete Mathematical Sciences and Cryptography 13 ( 4 ) 369 - 381 2010 [Refereed]
View Summary
In the field of experimental design, it is important to construct orthogonal designs. In this paper, we propose a new algorithm to construct orthogonal designs. This algorithm uses Ukita's algorithm, which is essentially based on projective geometries, and uses orthogonal designs constructed by error-correcting codes. We show some numerical examples of the proposed algorithm, and show that it can construct good orthogonal designs with low complexity even if there are high-order effects. © 2010 Taylor & Francis Group, LLC.
Asymptotic property of universal lossless coding for independent piecewise identically distributed sources
Tota Suko, Toshiyasu Matsushima, Shigeichi Hirasawa
Journal of Discrete Mathematical Sciences and Cryptography 13 ( 4 ) 383 - 391 2010 [Refereed]
View Summary
The universal lossless source coding problem is one of the most important problems in communication systems. The aim of source coding is to compress data to reduce costs in digital communication. Traditional universal source coding schemes are usually designed for stationary sources. Recently, some universal codes for nonstationary sources have been proposed. Independent piecewise identically distributed (i.p.i.d.) sources are simple nonstationary sources whose parameter changes discontinuously. In this paper, we assume a new class of i.p.i.d. sources, and we prove that Bayes codes minimize the mean redundancy when the parameter transition pattern is known but the parameter values are unknown. © 2010 Taylor & Francis Group, LLC.
A note on performance of generalized tail biting trellis codes
Shigeichi Hirasawa, Masao Kasahara
Journal of Discrete Mathematical Sciences and Cryptography 13 ( 2 ) 105 - 122 2010 [Refereed]
View Summary
Tail biting trellis codes and block concatenated codes are discussed from random coding arguments. Error exponents and decoding complexity for generalized tail biting (GTB) random trellis codes, and their relationships, are derived, where the GTB trellis codes consist of full tail biting (FTB) trellis codes, partial tail biting (PTB) trellis codes and direct truncated (DT) trellis codes. We show that the PTB trellis codes at all rates except low rates are superior to all of the GTB trellis codes, in the sense that they have a smaller upper bound on the probability of decoding error for given decoding complexity. We then propose a generalized version of the block concatenated codes constructed with GTB trellis inner codes, and derive error exponents and the decoding complexity for the proposed code. The results obtained show that the DT trellis inner codes are effective among the GTB trellis inner codes for constructing the generalized version of the concatenated codes while keeping the same decoding complexity as the original concatenated codes. We also show that larger error exponents are obtained by the generalized version of the concatenated codes if the decoding complexity is allowed to be larger than that of the original concatenated code, although it is still of polynomial order. © 2010 Taylor & Francis Group, LLC.
On a New Model for Automatic Text Categorization Based on Vector Space Model
Makoto Suzuki, Naohide Yamagishi, Takashi Ishida, Masayuki Goto, Shigeichi Hirasawa
IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC 2010) 3152 - 3159 2010 [Refereed]
View Summary
In our previous paper, we proposed a new classification technique called the Frequency Ratio Accumulation Method (FRAM). This is a simple technique that adds up the ratios of term frequencies among categories, and it is able to use index terms without limit. We then adopted the character N-gram to form index terms, thereby improving FRAM. However, FRAM did not have a satisfactory mathematical basis. Therefore, we present here a new mathematical model based on a "Vector Space Model" and consider its implications. The proposed method is evaluated by performing several experiments. In these experiments, we classify newspaper articles from the English Reuters-21578 data set and a Japanese CD-Mainichi 2002 data set using the proposed method. The Reuters-21578 data set is a benchmark data set for automatic text categorization. It is shown that FRAM has good classification accuracy. Specifically, the micro-averaged F-measure of the proposed method is 92.2% for English. The proposed method can perform classification utilizing a single program and is language-independent.
On the Condition of epsilon-Transmissible Joint Source-Channel Coding for General Sources and General Channels
Ryo Nomura, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E92A ( 11 ) 2936 - 2940 2009.11 [Refereed]
View Summary
The joint source-channel coding problem is considered. The joint source-channel coding theorem reveals the existence of a code for the pair of the source and the channel under the condition that the error probability is smaller than or equal to epsilon asymptotically. The separation theorem guarantees that we can achieve the optimal coding performance by using two-stage coding. In the case that epsilon = 0, Han showed the joint source-channel coding theorem and the separation theorem for general sources and channels. Furthermore, the epsilon-coding theorem (0 <= epsilon < 1) in a similar setting has been studied. However, the separation theorem had not been revealed, since it is difficult in general. We therefore consider the separation theorem in this setting.
Adaptive Decoding Algorithms for Low-Density Parity-Check Codes over the Binary Erasure Channel
Gou Hosoya, Hideki Yagi, Manabu Kobayashi, Shigeichi Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E92A ( 10 ) 2418 - 2430 2009.10 [Refereed]
View Summary
Two decoding procedures combined with a belief-propagation (BP) decoding algorithm for low-density parity-check codes over the binary erasure channel are presented. These algorithms continue a decoding procedure after the BP decoding algorithm terminates. We derive a condition that our decoding algorithms can correct an erased bit which is uncorrectable by the BP decoding algorithm. We show by simulation results that the performance of our decoding algorithms is enhanced compared with that of the BP decoding algorithm with little increase of the decoding complexity.
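The BP decoding algorithm over the binary erasure channel, which the procedures above continue after it terminates, reduces to the well-known peeling rule: any check equation containing exactly one erased bit determines that bit by parity. A minimal sketch with a toy parity-check matrix (illustrative, not taken from the paper):

```python
def bp_erasure_decode(H, y):
    """Peeling/BP decoder for the binary erasure channel: repeatedly find
    a check equation with exactly one erased bit (marked None) and recover
    it by parity. It stalls on stopping sets, which is where the paper's
    additional decoding procedures take over."""
    y = list(y)
    progress = True
    while progress:
        progress = False
        for row in H:
            erased = [j for j, h in enumerate(row) if h and y[j] is None]
            if len(erased) == 1:                   # exactly one erasure in this check
                j = erased[0]
                y[j] = sum(y[k] for k, h in enumerate(row)
                           if h and k != j) % 2    # parity fills the erased bit
                progress = True
    return y                                       # remaining None entries are uncorrectable

# toy parity-check matrix (illustrative, not from the paper)
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
print(bp_erasure_decode(H, [1, None, 0, None, 0, 1]))  # → [1, 0, 0, 1, 0, 1]
```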
On the Hyperparameter Estimation of Time Varying Poisson Model for Bayesian WWW Traffic Forecasting
Daiki Koizumi, Toshiyasu Matsushima, Shigeichi Hirasawa
Proceedings of the 2nd International Workshop of the European Research Consortium on Informatics and Mathematics (ERCIM) Working Group on Computing & Statistics 22 - 22 2009.10 [Refereed]
Bayesian Forecasting of WWW Traffic on the Time Varying Poisson Model
Daiki Koizumi, Toshiyasu Matsushima, Shigeichi Hirasawa
Proceedings of the 2009 International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'09) 2 683 - 689 2009.07 [Refereed]
Text Classification Using the Sum of Frequency Ratios of Word and N-gram Over Categories
SUZUKI Makoto, HIRASAWA Shigeichi
IEEJ Transactions on Electronics, Information and Systems 129 ( 1 ) 118 - 124 2009.01
View Summary
In this paper, we consider automatic text classification as a series of information processing steps, and propose a new classification technique, namely the "Frequency Ratio Accumulation Method (FRAM)". This is a simple technique that calculates the sum of the ratios of term frequencies in each category. However, it has the desirable property that feature terms can be used without an extraction procedure. We therefore use character N-grams and word N-grams as feature terms, exploiting this property of our classification technique. Next, we evaluate our technique by some experiments. In our experiments, we classify the newspaper articles of the Japanese "CD-Mainichi 2002" and English "Reuters-21578" data sets using Naive Bayes (the baseline method) and the proposed method. As a result, we show that the classification accuracy of the proposed method improves greatly compared with the baseline: it is 89.6% for Mainichi and 87.8% for Reuters. Thus, the proposed method has very high performance. Though the proposed method is simple, it offers a new viewpoint, has high potential, and is language-independent, so further development can be expected.
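As described, FRAM accumulates per-category term-frequency ratios and picks the category with the largest sum. A minimal sketch under simplifying assumptions (whitespace tokenization and a toy corpus; not the paper's exact preprocessing or N-gram features):

```python
from collections import Counter

def train_fram(labeled_docs):
    """For each term, compute its frequency ratio in each category:
    freq(term | category) / sum over all categories of freq(term | category).
    A sketch of the Frequency Ratio Accumulation Method as the abstract
    describes it; tokenization here is plain whitespace splitting."""
    counts = {cat: Counter() for cat, _ in labeled_docs}
    for cat, text in labeled_docs:
        counts[cat].update(text.split())
    ratios = {}
    vocab = set().union(*counts.values())
    for term in vocab:
        total = sum(c[term] for c in counts.values())
        ratios[term] = {cat: c[term] / total for cat, c in counts.items()}
    return ratios

def classify_fram(ratios, text):
    """Accumulate the ratios of the document's terms per category and
    return the category with the largest sum."""
    scores = Counter()
    for term in text.split():
        for cat, r in ratios.get(term, {}).items():
            scores[cat] += r
    return scores.most_common(1)[0][0]

docs = [("sports", "goal match team goal"),
        ("economy", "market stock market goal")]
ratios = train_fram(docs)
print(classify_fram(ratios, "match goal team"))  # → sports
```

Note how no feature selection step is needed: every observed term contributes its ratio, which is the property the abstract highlights.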
Fingerprinting Codes for Multimedia Data against Averaging Attack
Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E92A ( 1 ) 207 - 216 2009.01 [Refereed]
View Summary
Code construction for digital fingerprinting, which is a copyright protection technique for multimedia, is considered. Digital fingerprinting should deter collusion attacks, where several fingerprinted copies of the same content are mixed to disturb their fingerprints. In this paper, we consider the averaging attack, which is known to be effective for multimedia fingerprinting with the spread spectrum technique. We propose new methods for constructing fingerprinting codes that increase the coding rate of conventional fingerprinting codes while guaranteeing identification of the same number of colluders. With the new fingerprinting codes, the system can deal with a larger number of users to supply digital contents.
Reducing the Space Complexity of a Bayes Coding Algorithm using an Expanded Context Tree
Toshiyasu Matsushima, Shigeichi Hirasawa
2009 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY, VOLS 1- 4 719 - + 2009 [Refereed]
View Summary
Context tree models are widely used in many research fields. In previous research on non-predictive source coding and model selection, Patricia-like trees [7] are applied to context trees that are expanded as the length of the source sequence increases. The space complexity of Patricia-like context trees is O(t), where t is the length of the source sequence. On the other hand, the predictive Bayes source coding algorithm cannot use a Patricia-like context tree, because it is difficult to hold and update the posterior probability parameters on such a tree. Hence the space complexity of the expanded trees in the predictive Bayes coding algorithm is O(t^2). In this paper, we propose an efficient predictive Bayes coding algorithm using a new representation of the posterior probability parameters and a compact context tree holding the parameters, whose space complexity is O(t).
A Method for Grouping Symbol Nodes of Group Shuffled BP Decoding Algorithm
Yoshiyuki Sato, Gou Hosoya, Hideki Yagi, Shigeichi Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E91A ( 10 ) 2745 - 2753 2008.10 [Refereed]
View Summary
In this paper, we propose a method for enhancing performance of a sequential version of the belief-propagation (BP) decoding algorithm, the group shuffled BP decoding algorithm for low-density parity-check (LDPC) codes. An improved BP decoding algorithm, called the shuffled BP decoding algorithm, decodes each symbol node in serial at each iteration. To reduce the decoding delay of the shuffled BP decoding algorithm, the group shuffled BP decoding algorithm divides all symbol nodes into several groups. In contrast to the original group shuffled BP, which automatically generates groups according to symbol positions, in this paper we propose a method for grouping symbol nodes which generates groups according to the structure of a Tanner graph of the codes. The proposed method can accelerate the convergence of the group shuffled BP algorithm and obtain a lower error rate in a small number of iterations. We show by simulation results that the decoding performance of the proposed method is improved compared with those of the shuffled BP decoding algorithm and the group shuffled BP decoding algorithm.
Density Evolution Analysis of Robustness for LDPC Codes over the Gilbert-Elliott Channel
Manabu Kobayashi, Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E91A ( 10 ) 2754 - 2764 2008.10 [Refereed]
View Summary
In this paper, we analyze the robustness of low-density parity-check (LDPC) codes over the Gilbert-Elliott (GE) channel. For this purpose, we propose a density evolution method for the case where the LDPC decoder uses mismatched parameters for the GE channel. Using this method, we derive the region of tuples of true parameters and mismatched decoding parameters for the GE channel where the decoding error probability asymptotically approaches zero.
A Combined Matrix Ensemble of Low-Density Parity-Check Codes for Correcting a Solid Burst Erasure
Gou Hosoya, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E91A ( 10 ) 2765 - 2778 2008.10 [Refereed]
View Summary
A new ensemble of low-density parity-check (LDPC) codes for correcting a solid burst erasure is proposed. This ensemble is an instance of a combined matrix ensemble obtained by concatenating some LDPC matrices. We derive a new bound on the critical minimum span ratio of stopping sets for the proposed code ensemble by modifying the bound for the ordinary code ensemble. By calculating this bound, we show that the critical minimum span ratio of stopping sets for the proposed code ensemble is better than that of the conventional one while keeping the same critical exponent of stopping ratio for both ensembles. Furthermore, from experimental results, we show that the average minimum span of stopping sets for a solid burst erasure of the proposed codes is larger than that of the conventional ones.
A Discovering Method of Plagiarism Documents based on Word Sequence Kernel
Saimoto Shinya, Kumoi Gendo, Ishida Takashi, Hirasawa Shigeichi
Abstracts of Annual Conference of Japan Society for Management Information 2008 115 - 115 2008
View Summary
Recently, copyright infringement of digital documents through Web pages such as blogs has become a serious problem with the development of information technology. However, it is difficult to investigate plagiarized documents manually because of the increasing number of digital documents. This study proposes a method for automatically discovering documents suspected of plagiarism from the targeted documents. The method uses a plagiarism sentence judgment algorithm based on the Word Sequence Kernel (WSK) to improve on the parts that could not be detected by the method using the Smith-Waterman algorithm. An evaluation experiment using documents plagiarized from newspaper articles or Web pages shows the effectiveness of the proposed method.
Kumoi Gendo, Ishida Takashi, Goto Masayuki, Hirasawa Shigeichi
Abstracts of Annual Conference of Japan Society for Management Information 2008 74 - 74 2008
View Summary
Recently, copyright infringement of digital documents has become a serious problem with the spread of the Internet. One such problem is the creation of summaries of documents without the author's permission. In this paper, newspaper articles are the target documents requiring protection against copyright infringement. In techniques for discovering digital documents summarized from newspaper articles, it is known that the existence of many partially similar articles, such as editorials, lowers detection accuracy. Therefore, a method for identifying summary sentences created from editorials is proposed. To demonstrate the effectiveness of the proposed method, evaluation experiments using newspaper articles and manually created summaries are shown.
Multiuser detection algorithm for CDMA based on the belief propagation algorithm
Shunsuke Horii, Tota Suko, Toshiyasu Matsushima, Shigeichi Hirasawa
IEEE International Symposium on Spread Spectrum Techniques and Applications 194 - 199 2008 [Refereed]
View Summary
Optimum detection for the multiuser code-division multiple-access channel is prohibitively complex. This paper considers a new iterative multiuser detection algorithm based on the belief propagation algorithm. The idea of applying the belief propagation algorithm to the multiuser detection problem had been suggested previously; however, it was believed that applying the algorithm directly to the detection problem is impossible because it requires an exponentially large amount of computation, and only the parallel interference canceller had been derived as an approximation of belief propagation. In this paper, we show that the belief propagation algorithm can be applied to the detection problem by converting the factor graph structure. The performance of the detector based on the belief propagation algorithm is better than that of the parallel interference canceller. © 2008 IEEE.
On the Condition of epsilon-Transmissibility in Joint Source-Channel Coding for Mixed Sources and Mixed Channels
Ryo Nomura, Toshiyasu Matsushima, Shigeichi Hirasawa
2008 INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY AND ITS APPLICATIONS, VOLS 1-3 1299 - + 2008 [Refereed]
View Summary
In this study, we consider a joint source-channel coding problem. The joint source-channel coding theorem reveals the existence of a code for the pair of the source V and the channel W under the condition that the probability of error is smaller than or equal to epsilon asymptotically. We show the separation theorem for general sources and general channels.
Error control codes for parallel channel with correlated errors
Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
2008 IEEE INFORMATION THEORY WORKSHOP 421 - + 2008 [Refereed]
View Summary
This paper introduces two channel models of correlated parallel channels. We then analyze the structure of error correcting codes over these correlated parallel channels. We derive necessary and sufficient conditions for these codes, and some code constructions are presented. We also show some upper and lower bounds on the coding rate of error correcting codes for correlated parallel channels. The introduced channel models are related to burst error channels, and the codes analyzed in this paper can be used as asymmetric interleaving codes for burst error channels.
Random Coding Bounds for Correlated Parallel Channels with Unidirectionally Cooperating Decoders
Hideki Yagi, Manabu Kobayashi, Shigeichi Hirasawa
2008 INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY AND ITS APPLICATIONS, VOLS 1-3 121 - + 2008 [Refereed]
View Summary
We propose a maximum likelihood decoding scheme with partial decoder cooperation, in which only unidirectional information passing is possible, for correlated parallel channels. We derive a bound on the probability of decoding error for the proposed scheme with a coset code ensemble by using a randomization technique. From the property of the derived error exponent, we show a condition under which the scheme achieves the capacity region. We compare the proposed scheme with other possible schemes with unidirectional decoder cooperation.
An Efficient Design of Irregular LDPC Codes using Beta Approximation for the Gilbert-Elliott Channel
Manabu Kobayashi, Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
2008 INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY AND ITS APPLICATIONS, VOLS 1-3 1543 - + 2008 [Refereed]
View Summary
In this paper, we investigate the design of low-density parity-check (LDPC) codes for the Gilbert-Elliott (GE) channel. Recently, Eckford et al. proposed a design method for irregular LDPC codes using approximate density evolution (DE) for Markov channels [7]. In the design method proposed by Eckford et al., the probability density function (PDF) of the messages from variable nodes to check nodes is approximated by the Gaussian distribution. In this paper, we first show a method to obtain an accurate PDF of the messages from variable nodes to check nodes by utilizing two DE steps for the Gaussian distribution. We call this method iterative density approximation (IDA). Using this method, we can design good LDPC codes. Next, we propose an efficient design method for irregular LDPC codes by applying a Beta approximation to the PDF of the channel state probability for the GE channel. Consequently, we show that the complexity of calculating the PDFs of the channel messages is considerably reduced, while the rates of the LDPC codes obtained by using the proposed approximation are almost the same as those of the IDA method.
Asymptotic Evaluation of Distance Measure on High Dimensional Vector Spaces in Text Mining
Masayuki Goto, Takashi Ishida, Makoto Suzuki, Shigeichi Hirasawa
2008 INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY AND ITS APPLICATIONS, VOLS 1-3 439 - + 2008 [Refereed]
View Summary
This paper discusses document classification problems in text mining from the viewpoint of asymptotic statistical analysis. In text mining, several heuristics are applied in practical analysis because of their experimental effectiveness in many case studies. A theoretical explanation of the performance of text mining techniques is required, and such analysis gives us a very clear picture. In this paper, the performance of distance measures used to classify documents is analyzed from the new viewpoint of asymptotic analysis. We also discuss the asymptotic performance of the IDF measure used in the information retrieval field.
A note on the epsilon-overflow probability of lossless codes
Ryo Nomura, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E90A ( 12 ) 2965 - 2970 2007.12 [Refereed]
View Summary
In this letter, we generalize the achievability of variable-length coding from two viewpoints. One is the definition of an overflow probability, and the other is the definition of achievability. We define the overflow probability as the probability that the codeword length, not per symbol, is larger than eta(n), and we introduce the epsilon-achievability of variable-length codes, which implies the existence of a code for the source under the condition that the overflow probability is smaller than or equal to epsilon. We then show that the epsilon-achievability of variable-length codes is essentially equivalent to the epsilon-achievability of fixed-length codes for general sources. Moreover, using the above results, we show the condition of epsilon-achievability for some restricted sources given epsilon.
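In the abstract's notation, the two definitions can be written as follows (a reconstruction from the wording above, not a verbatim formula from the paper):

```latex
% Overflow probability of a variable-length code with codeword length l(X^n),
% measured on the whole codeword rather than per symbol:
\[
  P_{\mathrm{ov}}(n) \;=\; \Pr\bigl\{\, l(X^n) > \eta(n) \,\bigr\}.
\]
% epsilon-achievability: a code for the source exists such that
\[
  P_{\mathrm{ov}}(n) \;\le\; \epsilon .
\]
```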
Reliability-Based Hybrid ARQ Scheme with Encoded Parity Bit Retransmissions and Message Passing Decoding
Daiki Koizumi, Naoto Kobayashi, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E90-A ( 9 ) 1736 - 1744 2007.09 [Refereed]
View Summary
Reliability-based hybrid ARQ (RB-HARQ) is a kind of incremental-redundancy ARQ recently introduced. In the RB-HARQ, the receiver returns both NAK signal and set of unreliable bit indices if the received sequence is not accepted. Since each unreliable bit index is determined by the bitwise posterior probability, better approximation of that probability becomes crucial as the number of retransmissions increases. Assuming the systematic code for the initial transmission, the proposed RB-HARQ scheme has the following two features: (a) the sender retransmits newly encoded and interleaved parity bits corresponding to the unreliable information bits; (b) the retransmitted parity bits as well as the initial received sequence are put all together to perform the message passing decoding i.e. the suboptimal MAP decoding. Finally, simulation results are shown to evaluate the above two features.
Short concatenated fingerprinting codes for multimedia data
Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
45th Annual Allerton Conference on Communication, Control, and Computing 2007 2 1040 - 1045 2007
View Summary
Digital fingerprinting, a copyright protection technique for multimedia data, is considered. For digital fingerprinting, measures against collusion attacks, where several fingerprinted copies of the same content are mixed to disturb their fingerprints, should be taken. In this paper, we propose a shortening method for collusion-secure fingerprinting codes based on finite geometries, which reduces the code length while keeping the number of codewords and the codes' resilience. We also propose a concatenated coding method which combines a conventional collusion-secure fingerprinting code and an error correcting code to increase the coding rate. Due to the new fingerprinting codes, the system can deal with a larger number of users to distribute a digital content.
A class of prior distributions on context tree models and an efficient algorithm of the Bayes codes assuming it
Toshiyasu Matsushima, Shigeichi Hirasawa
ISSPIT 2007 - 2007 IEEE International Symposium on Signal Processing and Information Technology 938 - 941 2007 [Refereed]
View Summary
The CTW (Context Tree Weighting) algorithm is an efficient universal coding algorithm on context tree models. The CTW algorithm has been interpreted as the non-predictive Bayes coding algorithm assuming a special prior distribution over context tree models. An efficient recursive calculation method using a gathering context tree in the CTW algorithm is well known. Although there exist efficient recursive algorithms for the Bayes codes assuming a special class of prior distributions, the basic property of the prior distribution class has scarcely been investigated. In this paper, we show the exact definition of a prior distribution class on context tree models that has a property similar to the class of conjugate priors. We show that the posterior distribution is included in the same distribution class as the prior distribution class. Hence, we can also construct an efficient algorithm for predictive Bayes codes on context tree models by using the prior distribution class. Lastly, the asymptotic mean code length of the codes is investigated. © 2007 IEEE.
The construction of periodically time-variant convolutional codes using binary linear block codes
Naonori Ogasahara, Manabu Kobayashi, Shigeichi Hirasawa
ELECTRONICS AND COMMUNICATIONS IN JAPAN PART III-FUNDAMENTAL ELECTRONIC SCIENCE 90 ( 9 ) 31 - 40 2007 [Refereed]
View Summary
In 1996 Rosenthal and York proposed (time-invariant) BCH convolutional codes [4], in which the parity check matrix of a BCH code is used in the construction of the convolutional code. The lower bound on the minimum free distance of a BCH convolutional code is guaranteed by the BCH bound. In this paper we propose a periodically time-variant convolutional code that can be constructed using the parity check matrix of any binary linear block code, not only the BCH parity check matrix, and show that the lower bound on the minimum free distance is guaranteed by the minimum distance of the binary linear block code. In addition, taking 12 binary linear block codes as examples, we compare the proposed codes with BCH convolutional codes under three evaluation criteria (minimum free distance, number of delay elements, coding rate) and show that there exist proposed codes superior to existing ones. (C) 2007 Wiley Periodicals, Inc. Electron Comm Jpn Pt 3, 90(9):31-40, 2007; Published online in Wiley InterScience (www.interscience.wiley.com).
On error exponents for a variable size list decoder using the Viterbi algorithm with likelihood testing
Toshihiro Niinomi, Toshiyasu Matsushima, Shigeichi Hirasawa
ELECTRONICS AND COMMUNICATIONS IN JAPAN PART III-FUNDAMENTAL ELECTRONIC SCIENCE 90 ( 6 ) 27 - 36 2007 [Refereed]
View Summary
The likelihood ratio (LR) decision criterion [3] employed by Yamamoto and Itoh's decision feedback (ARQ) scheme [1] is known to be highly compatible with the Viterbi algorithm (VA), since it requires only a small amount of computation. In addition, Forney has derived an extension of the coding theorem to the variable size list decoder for maximum likelihood decoding of block codes. In this paper we propose an algorithm that extends LR-based variable size list decoding to the decoding of convolutional codes with the Viterbi algorithm; assuming a discrete memoryless channel, we also derive a lower bound on the decoder's error exponents. (C) 2007 Wiley Periodicals, Inc.
A note on error correction schemes using LDPC codes with a high-capacity feedback channel
Naoto Kobayashi, Toshiyasu Matsushima, Shigeichi Hirasawa
2007 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY PROCEEDINGS, VOLS 1-7 2381 - 2385 2007 [Refereed]
View Summary
In this paper, transmission schemes with a noiseless, high-capacity feedback channel are considered. We propose two types of transmission schemes using LDPC codes and present the density evolution analysis method for the proposed schemes. We investigate their performance by density evolution analysis and computer simulations. The results show some interesting characteristics of schemes with a high-capacity feedback channel.
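Density evolution of the kind applied here tracks the probability that a message is still in error (or erased) across decoding iterations. For a regular (dv, dc) LDPC ensemble on the binary erasure channel the recursion has a simple one-dimensional closed form; the sketch below uses that textbook BEC instance as an assumption, not the paper's feedback channel model:

```python
def de_bec(eps, dv, dc, iters=2000):
    """One-dimensional density evolution for a regular (dv, dc) LDPC
    ensemble on the BEC: x_{l+1} = eps * (1 - (1 - x_l)**(dc-1))**(dv-1),
    where x_l is the variable-to-check erasure probability at iteration l."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

def converges(eps, dv=3, dc=6):
    # True if the erasure probability is driven to (near) zero,
    # i.e. eps is below the ensemble's decoding threshold
    return de_bec(eps, dv, dc) < 1e-9
```

For the (3, 6)-regular ensemble the threshold is about 0.429, so erasure rates below it decode successfully and rates above it stall at a nonzero fixed point.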
On the epsilon-overflow probability of lossless codes
Ryo Nomura, Toshiyasu Matsushima, Shigeichi Hirasawa
2007 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY PROCEEDINGS, VOLS 1-7 441 - + 2007 [Refereed]
View Summary
In this paper, we generalize the achievability of variable-length coding from two viewpoints: the definition of the overflow probability, and the definition of achievability. We define the overflow probability as the probability that the total codeword length, not the per-symbol length, exceeds eta(n), and we introduce the epsilon-achievability of variable-length codes, which implies the existence of a code for the source such that the overflow probability is at most epsilon.
Then we show that the epsilon-achievability of variable-length codes is essentially equivalent to the epsilon-achievability of fixed-length codes for general sources. Moreover, we derive the condition for epsilon-achievability for some restricted sources, given epsilon.
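The two definitions the abstract contrasts can be written out explicitly; the notation below is an interpretive sketch of the stated definitions, not a quotation of the paper, with $\ell_n(X^n)$ denoting the total codeword length for the source block $X^n$:

```latex
% Overflow probability of a variable-length code
% (total codeword length, not length per symbol):
\mathrm{P_{ov}}(n) \;=\; \Pr\{\, \ell_n(X^n) > \eta(n) \,\}

% epsilon-achievability: the threshold \eta(n) is epsilon-achievable if
% there exists a sequence of variable-length codes such that, for all
% sufficiently large n,
\Pr\{\, \ell_n(X^n) > \eta(n) \,\} \;\le\; \epsilon .
```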
Statistical evaluation of measure and distance on document classification problems in text mining
Masayuki Goto, Takashi Ishida, Shigeichi Hirasawa
2007 CIT: 7TH IEEE INTERNATIONAL CONFERENCE ON COMPUTER AND INFORMATION TECHNOLOGY, PROCEEDINGS 674 - + 2007 [Refereed]
View Summary
This paper discusses document classification problems in text mining from the viewpoint of asymptotic statistical analysis. By formulating a statistical hypothesis test specialized to the text mining problem, some interesting properties can be made visible. In practical text mining, several heuristics are applied because of their experimental effectiveness in many case studies, so a theoretical explanation of the performance of text mining techniques is needed, and this approach gives a very clear picture. The distance measure in word vector space is used to classify documents, and in this paper its performance is also analyzed from the new viewpoint of asymptotic analysis.
Word segmentation for the sequences emitted from a word-valued source
Takashi Ishida, Toshiyasu Matsushima, Shigeichi Hirasawa
2007 CIT: 7TH IEEE INTERNATIONAL CONFERENCE ON COMPUTER AND INFORMATION TECHNOLOGY, PROCEEDINGS 662 - 667 2007 [Refereed]
View Summary
Word segmentation is the most fundamental and important process for Japanese or Chinese language processing. Because there is no separation between words in these languages, we first have to separate a sequence into words. For this problem, the approach based on probabilistic language models is known to be highly effective in practice. On the other hand, a word-valued source (WVS) has recently been proposed as a new class of source model for the source coding problem. This model can be regarded as reflecting more of the probabilistic structure of natural languages, and a Japanese or Chinese sentence may be regarded as a sequence emitted from a non-prefix-free WVS. In this paper, as the first phase of applying the WVS to natural language processing, we formulate a word segmentation problem for sequences from a non-prefix-free WVS. We then examine the performance of word segmentation for the models by numerical computation.
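In the simplest memoryless case, segmenting an unspaced character sequence under a word-valued source reduces to dynamic programming over all parses of the string into dictionary words, picking the parse with the largest product of word probabilities. The vocabulary and probabilities below are hypothetical toy values, not the paper's model:

```python
import math

# hypothetical word-valued source: word -> emission probability
WORDS = {"the": 0.3, "them": 0.1, "me": 0.2, "men": 0.2, "u": 0.2}

def segment(s, words):
    """Most probable parse of s into dictionary words, by Viterbi-style DP."""
    maxlen = max(len(w) for w in words)
    # best[i] = (log-prob of the best parse of s[:i], split point)
    best = [(-math.inf, -1) for _ in range(len(s) + 1)]
    best[0] = (0.0, -1)
    for i in range(1, len(s) + 1):
        for j in range(max(0, i - maxlen), i):
            w = s[j:i]
            if w in words and best[j][0] > -math.inf:
                score = best[j][0] + math.log(words[w])
                if score > best[i][0]:
                    best[i] = (score, j)
    if best[len(s)][0] == -math.inf:
        return None                      # no parse exists
    out, i = [], len(s)
    while i > 0:                         # backtrack through split points
        j = best[i][1]
        out.append(s[j:i])
        i = j
    return out[::-1]
```

For example, `segment("themen", WORDS)` returns `["the", "men"]`, the only parse of that string available in this toy vocabulary.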
A Note on HTTP Traffic Analysis of the Time Series Model with a Time Varying Density Parameter
Daiki Koizumi, Toshiyasu Matsushima, Shigeichi Hirasawa
Proc. of the 28th Symposium on Information Theory and Its Applications (SITA2005) 2 729 - 732 2005.12
A Study on the Analysis of Internet Traffic Using a Nonstationary Time Series Model Following a Poisson Distribution
Daiki Koizumi, Toshiyasu Matsushima, Shigeichi Hirasawa
Proceedings of the 4th Forum on Information Technology (FIT2005) 4 35 - 38 2005.08
A Study of Reliability Based Hybrid ARQ Scheme with Bitwise Posterior Probability Evaluation from Message Passing Algorithm
Daiki Koizumi, Naoto Kobayashi, Toshiyasu Matsushima, Shigeichi Hirasawa
Technical Report of IEICE (IT2005-24) 105 ( 85 ) 11 - 16 2005.05
Matrix Clustering Method Achieving Specific Accuracy by Modified Apriori Algorithm
Kobayashi Manabu, Matsushima Toshiyasu, Hirasawa Shigeichi
Abstracts of Annual Conference of Japan Society for Management Information 2005 4 - 4 2005
View Summary
Matrix clustering is a method to obtain submatrices with specified area and density (the ratio of 1-elements in the submatrix) from a given sparse binary matrix. This method has been proposed as a data mining technique for Customer Relationship Management (CRM) and makes it possible to cluster related items and customers. In this paper, we extend the apriori algorithm and propose a new matrix clustering algorithm that obtains all submatrices satisfying a specified condition. As a result, it is possible to analyze the purchase information of customers in detail.
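The apriori connection can be sketched in its density-1 special case: a column set whose rows of all 1s number at least a threshold is exactly a frequent itemset, and the supporting row count is anti-monotone, which enables level-wise pruning. Density 1 and the toy matrix are simplifying assumptions here; the paper's algorithm handles a general density threshold:

```python
def dense_submatrices(matrix, min_rows):
    """Level-wise (apriori-style) enumeration of column sets supported by
    >= min_rows all-ones rows; each (rows, cols) pair is a fully dense
    submatrix of the given binary matrix."""
    n_cols = len(matrix[0])

    def rows_of(cols):
        return frozenset(r for r in range(len(matrix))
                         if all(matrix[r][c] for c in cols))

    # level 1: single columns with enough supporting rows
    level = {frozenset([c]) for c in range(n_cols)
             if len(rows_of([c])) >= min_rows}
    found = {s: rows_of(s) for s in level}
    while level:
        nxt = set()
        for a in level:              # join step: grow column sets by one
            for b in level:
                u = a | b
                if len(u) == len(a) + 1 and u not in found:
                    rows = rows_of(u)
                    if len(rows) >= min_rows:   # anti-monotone pruning
                        nxt.add(u)
                        found[u] = rows
        level = nxt
    return found

EXAMPLE = [[1, 1, 0, 1],
           [1, 1, 0, 0],
           [1, 1, 1, 0],
           [0, 0, 1, 1]]
```

On `EXAMPLE` with `min_rows=2`, the only multi-column result is columns {0, 1}, supported by rows {0, 1, 2}: a 3x2 all-ones submatrix.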
A Document Classification Method and its Application to Questionnaire Analyses
Hirasawa Shigeichi, Ishida Takashi, Adachi Hiroshi, Gotoh Masayuki, Sakai Tetsuya
Abstracts of Annual Conference of Japan Society for Management Information 2005 60 - 60 2005
View Summary
Using information retrieval techniques, especially those based on the probabilistic latent semantic indexing (PLSI) model, we discuss methods for classification and clustering of documents.
A note on a decoding algorithm of codes on graphs with small loops
N Kobayashi, T Matsushima, S Hirasawa
Proceedings of the IEEE ITSOC Information Theory Workshop 2005 on Coding and Complexity 109 - 113 2005 [Refereed]
View Summary
The best-known algorithm for decoding low-density parity-check (LDPC) codes is the sum-product algorithm (SPA). The SPA is a message-passing algorithm on a graphical model called a factor graph (FG). The performance of the SPA depends on the structure of loops in the FG. Pearl showed that loops in a graphical model can be erased by the clustering method, which merges plural nodes into a single node. In this paper, we show several examples of decoding on an FG to which the clustering method is applied, and we propose an efficient decoding algorithm for it.
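What the SPA approximates on a loopy factor graph is the exact bitwise posterior marginal, which for a tiny code can be computed by brute force over all codewords. The parity-check matrix and channel parameters below are illustrative assumptions, not taken from the paper:

```python
from itertools import product

# small illustrative parity-check matrix of a (6, 3) code
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def codewords(H):
    n = len(H[0])
    return [c for c in product((0, 1), repeat=n)
            if all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H)]

def bitwise_posteriors(received, p, H):
    """Exact P(x_i = 1 | y) over a BSC with crossover p, uniform codeword
    prior: the quantity that message-passing decoders approximate."""
    cws = codewords(H)

    def lik(c):  # BSC likelihood of codeword c given the received word
        d = sum(a != b for a, b in zip(c, received))
        return p ** d * (1 - p) ** (len(c) - d)

    z = sum(lik(c) for c in cws)
    return [sum(lik(c) for c in cws if c[i] == 1) / z
            for i in range(len(received))]
```

With an all-zero received word and a small crossover probability, every bit's posterior probability of being 1 falls below 1/2, so bitwise MAP decoding returns the all-zero codeword.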
Representation method for a set of documents from the viewpoint of Bayesian statistics
Masayuki Goto, Takashi Ishida, Shigeichi Hirasawa
Proceedings of the IEEE International Conference on Systems, Man and Cybernetics 5 4637 - 4642 2003
View Summary
In this paper, we consider the Bayesian approach to the representation of a set of documents. In this field, many models have previously been proposed, such as latent semantic analysis (LSA), probabilistic latent semantic analysis (PLSA), the Semantic Aggregate Model (SAM), and Bayesian Latent Semantic Analysis (BLSA). In this paper, we formulate the Bayes optimal solutions for estimating the parameters and selecting the dimension of the hidden latent class in these models and analyze their asymptotic properties.
A parallel iterative decoding algorithm for zero-tail and tail-biting convolutional codes
T Matsushima, TK Matsushima, S Hirasawa
2003 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY - PROCEEDINGS 175 - 175 2003 [Refereed]
An efficient heuristic search method for maximum likelihood decoding of linear block codes using dual codes
T Okada, M Kobayashi, S Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E85A ( 2 ) 485 - 489 2002.02 [Refereed]
View Summary
Y. S. Han et al. have proposed an efficient maximum likelihood decoding (MLD) algorithm using the A* algorithm, a graph search method. In this paper, we propose a new MLD algorithm for linear block codes. The proposed algorithm improves that of Han et al. by utilizing codewords of dual codes, which reduces the number of codewords generated in the MLD algorithm. We show that the complexity of the proposed decoding algorithm is reduced compared to that of Han et al. without increasing the probability of decoding error.
An alternate algorithm for calculating generalized posterior probability and decoding
T Matsushima, TK Matsushima, S Hirasawa
ISIT: 2002 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY, PROCEEDINGS 338 - 338 2002
On the generalized viterbi algorithm using likelihood ratio testing
T Niinomi, T Matsushima, S Hirasawa
ISIT: 2002 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY, PROCEEDINGS 366 - 366 2002 [Refereed]
An analysis of the difference of code lengths between two-step codes based on MDL principle and bayes codes
M Goto, T Matsushima, S Hirasawa
IEEE TRANSACTIONS ON INFORMATION THEORY 47 ( 3 ) 927 - 944 2001.03 [Refereed]
View Summary
In this paper, we discuss the difference in code lengths between the code based on the minimum description length (MDL) principle (the MDL code) and the Bayes code under the condition that the same prior distribution is assumed for both codes. It is proved that the code length of the Bayes code is smaller than that of the MDL code by o(1) or O(1) for the discrete model class and by O(1) for the parametric model class. Because we can assume the same prior for the Bayes code as for the code based on the MDL principle, it is possible to construct the Bayes code with equal or smaller code length than the code based on the MDL principle. From the viewpoint of mean code length per symbol unit (compression rate), the Bayes code is asymptotically indistinguishable from the MDL two-stage codes.
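The discrete-model-class comparison can be illustrated directly: give both codes the same uniform prior on a finite parameter grid. The two-part (MDL) code pays for the index of the best single parameter plus the data under it, while the Bayes code encodes with the mixture, and since the mixture is at least as large as its largest weighted term, the Bayes length is never longer. The Bernoulli class and the grid below are illustrative assumptions, not the paper's model class:

```python
from math import log2

GRID = [i / 20 for i in range(1, 20)]   # hypothetical parameter grid

def nll_bits(seq, th):
    # ideal codelength of a binary sequence under Bernoulli(th), in bits
    k = sum(seq)
    n = len(seq)
    return -(k * log2(th) + (n - k) * log2(1 - th))

def two_part_len(seq, grid=GRID):
    # MDL two-part code: log2|grid| bits for the parameter index,
    # then the data encoded at the best quantized parameter
    return log2(len(grid)) + min(nll_bits(seq, th) for th in grid)

def bayes_len(seq, grid=GRID):
    # Bayes code under the SAME uniform prior: encode with the mixture
    k = sum(seq)
    n = len(seq)
    mix = sum(th ** k * (1 - th) ** (n - k) for th in grid) / len(grid)
    return -log2(mix)
```

For every sequence, `bayes_len(seq) <= two_part_len(seq)` holds pointwise, matching the paper's conclusion that the Bayes code is never longer than the two-part code under the same prior.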
An iterative algorithm for calculating posterior probability and model representation
T. Matsushima, T. K. Matsushima, S. Hirasawa
IEEE International Symposium on Information Theory - Proceedings 236 2001
View Summary
In this paper, we introduce a representation method of probability models that can be applied to any code such as turbo, LDPC or tail-biting code. Moreover, we propose an iterative algorithm that calculates marginal posterior probabilities on the introduced probability model class. The decoding error probability for the LDPC codes of the proposed algorithm is less than that of the sum-product algorithm.
A study of the decision of control parameters for adaptive automatic-repeat request strategy
K Tsukamoto, T Matsushima, S Hirasawa
ELECTRONICS AND COMMUNICATIONS IN JAPAN PART I-COMMUNICATIONS 84 ( 11 ) 61 - 70 2001 [Refereed]
View Summary
In many practical transmission systems, the transmission error rate of the channel is time-varying. In previous works on controlling the time-varying channel, the state of the channel is estimated and uniquely determined, and then the optimum value of a control parameter is chosen for the determined state. However, there is a discrepancy between the actual channel state and the estimated state, and the optimum control parameter determined for the estimated state does not necessarily give the best performance with respect to an objective function of error control such as throughput efficiency. In this study, the expected throughput is calculated using the posterior probability of each state, and an algorithm that selects the control parameter maximizing the expected performance is proposed. From the viewpoint of statistical decision theory, it is shown that this selection is optimal in the sense of maximizing throughput under the Bayes criterion. The proposed method is evaluated by simulation and its effectiveness is shown. (C) 2001 Scripta Technica
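The Bayes-criterion selection described above amounts to averaging each candidate parameter's performance over the channel-state posterior, rather than optimizing for the single most probable state. The two states and the throughput values below are hypothetical illustrations, not the paper's figures:

```python
def bayes_optimal_parameter(posterior, throughput):
    """posterior: state -> probability; throughput: param -> {state: value}.
    Returns the control parameter maximizing expected throughput
    under the posterior (Bayes criterion)."""
    return max(throughput,
               key=lambda a: sum(posterior[s] * throughput[a][s]
                                 for s in posterior))

# hypothetical two-state channel and two frame-length settings
TP = {"long_frame":  {"good": 0.9, "bad": 0.2},
      "short_frame": {"good": 0.7, "bad": 0.5}}
```

With posterior {good: 0.55, bad: 0.45}, a hard state estimate picks "good" and hence the long frame, but the expected throughput of the short frame (0.610) beats the long frame (0.585), so the Bayes rule chooses differently: exactly the discrepancy the paper addresses.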
Achievable rates of random number generators for an arbitrary prescribed distribution from an arbitrary given distribution
T Yoshida, T Matsushima, S Hirasawa
2000 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY, PROCEEDINGS 155 - 155 2000
View Summary
In this paper, we show the maximal rates at which random number generators can generate a random sequence with an arbitrary prescribed distribution from a random sequence with an arbitrary given distribution.
On the variance and the probability of length overflow of lossless codes
R Nomura, T Matsushima, S Hirasawa
2000 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY, PROCEEDINGS 44 - 44 2000 [Refereed]
View Summary
In this paper, we show the probability of length overflow of several codes by using the variance and the asymptotic normality of the codelength.
On decoding methods beyond the BCH bound and their applications to soft-decision decoding
M Kobayashi, T Matsushima, S Hirasawa
ELECTRONICS AND COMMUNICATIONS IN JAPAN PART III-FUNDAMENTAL ELECTRONIC SCIENCE 82 ( 9 ) 39 - 51 1999.09 [Refereed]
View Summary
For binary BCH codes, several decoding methods that exceed the BCH bound and correct errors uncorrectable by conventional bounded-distance decoding have been proposed. This article proposes an algorithm that reduces the amount of computation in a decoding method exceeding the BCH bound by solving the equations for the unknown variables beforehand and limiting the range of the error locations. Further, this decoding method exceeding the BCH bound is applied to soft-decision decoding methods that use bounded-distance decoding multiple times, in particular Chase decoding, the decoding of Tanaka et al., and that of Kaneko et al. It is shown that both the amount of computation and the decoding error rate are improved. (C) 1999 Scripta Technica.
A generalization of B.S. Clarke and A.R. Barron's asymptotics of Bayes codes for FSMX sources
M Gotoh, T Matsushima, S Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E81A ( 10 ) 2123 - 2132 1998.10 [Refereed]
View Summary
We generalize B. S. Clarke and A. R. Barron's analysis of the Bayes method to FSMX sources. The FSMX source considered here is specified by the set of all states and its parameter value. First, we show the asymptotic codelengths of the Bayes codes for individual sequences from FSMX sources. Second, we show the asymptotic expected codelengths. The Bayesian posterior density and the maximum likelihood estimator satisfy asymptotic normality for finite ergodic Markov sources, and this is the key to our analysis.
Type for hierarchical probability model class and universal coding
T Matsushima, S Hirasawa
1998 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY - PROCEEDINGS 319 - 319 1998 [Refereed]
On error rates of statistical model selection based on information criteria
H Gotoh, T Matsushima, S Hirasawa
1998 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY - PROCEEDINGS 417 - 417 1998 [Refereed]
View Summary
In this paper, we derive upper bounds on the error rate P(e)* of statistical model selection using information criteria. Similar bounds were derived by J. Suzuki [2]; we generalize the results to a general model class.
Masayuki Gotoh, Toshiyasu Matsushima, Shigeichi Hirasawa
IEEE International Symposium on Information Theory ( IEEE ISIT97 ) 1997.06 [Refereed]
New architecture of signature analyzers for multiple-output circuits
Tomoko K. Matsushima, Toshiyasu Matsushima, Shigeichi Hirasawa
Proceedings of the IEEE International Conference on Systems, Man and Cybernetics 4 3900 - 3905 1997
View Summary
This paper presents a new architecture for multiple-input signature analyzers. The proposed signature analyzer with Hδ inputs is designed by parallelizing a GLFSR(δ, m), where δ is the number of input signals and m is the number of stages in the feedback shift register. The GLFSR, developed by Pradhan and Gupta, is a general framework for representing LFSR-based signature analyzers. The parallelization technique described in this paper can be applied to any kind of GLFSR signature analyzer, e.g., SISRs, MISRs, multiple MISRs, and MLFSRs. It is shown that the proposed signature analyzer with Hδ inputs requires less complex hardware than either a single GLFSR(Hδ, m) or a parallel construction of H original GLFSR(δ, m)s. It is also shown that the proposed parallelization technique can be applied to a test pattern generator in BIST, since the GLFSR is also used to generate test patterns for a CUT. The proposed technique would be practical for testing CUTs with large numbers of inputs and outputs, since the test circuit occupies a smaller area on the LSI chip than conventional test circuits.
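One of the GLFSR instances being parallelized, the multiple-input signature register (MISR), compacts m parallel output streams into an m-bit signature by XOR-ing each stream into a feedback shift register. The tap positions and stream values below are illustrative assumptions, and this sketch shows only the basic serial MISR, not the paper's parallelized construction:

```python
def misr_signature(streams, taps):
    """streams: sequence of time steps, each an m-tuple of parallel input
    bits.  taps: set of stages whose next state XORs in the feedback bit
    (the output of stage m-1).  Returns the final m-bit signature."""
    m = len(streams[0])
    state = [0] * m
    for bits in streams:
        fb = state[-1]                       # feedback from the last stage
        nxt = [0] * m
        for i in range(m):
            shifted = state[i - 1] if i > 0 else 0
            nxt[i] = shifted ^ bits[i] ^ (fb if i in taps else 0)
        state = nxt
    return state

STREAM = [(0, 1, 1, 0), (1, 0, 0, 1), (1, 1, 0, 0)]
```

A fault-free circuit and a faulty one then compare by signature alone; an all-zero response yields an all-zero signature, while any single late injected 1 necessarily perturbs it.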
Machine learning by a subset of hypotheses
Takafumi Mukouchi, Toshiyasu Matsushima, Shigeichi Hirasawa
Proceedings of the IEEE International Conference on Systems, Man and Cybernetics 3 2533 - 2538 1997
View Summary
Bayesian theory is effective in statistics, lossless source coding, machine learning, etc. It is often, however, computationally expensive, since the calculation of posterior probabilities and of mixture distributions is not tractable. In this paper, we propose a new method for approximately calculating mixture distributions in a discrete hypothesis class.
Asymptotic property of sufficient statistic codes
Toshiyasu Matsushima, Shigeichi Hirasawa
IEEE International Symposium on Information Theory - Proceedings 421 1997 [Refereed]
View Summary
In this paper, we obtain an accurate asymptotic formula of the Bayes optimal codelength in the case that the sources are not always i.i.d. sources by deriving the asymptotic codelength of sufficient statistic codes. © 1997 IEEE.
A Bayes coding algorithm for FSM sources
T Matsushima, S Hirasawa
PROCEEDINGS 1995 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY 388 - 388 1995 [Refereed]
A two-stage universal coding procedure using sufficient statistics
T Matsushima, S Hirasawa
PROCEEDINGS 1995 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY 17 - 17 1995 [Refereed]
On block coded ARQ schemes cumulating the rejected sequences
Toshihiro Niinomi, Toshiyasu Matsushima, Shigeichi Hirasawa
IEEE International Symposium on Information Theory - Proceedings 376 1994 [Refereed]
View Summary
We show, by random coding arguments, that received sequences which are rejected but not discarded in ARQ schemes make the next decision after a repeat request more powerful. We first give the basic algorithm of ordinary ARQ with some decision criterion, and define ARQ with accumulation of the rejected sequences, a natural extension of the ordinary scheme. Then we study the error exponent of decoding the second received sequence under the condition that the first is rejected. We assume block-coded ARQ with noiseless feedback. © 1994 IEEE.
A Bayes coding algorithm using context tree
Toshiyasu Matsushima, Shigeichi Hirasawa
IEEE International Symposium on Information Theory - Proceedings 386 1994 [Refereed]
View Summary
The context tree weighting (CTW) algorithm [Willems et al., 1993] achieves high compression in universal coding of FSMX sources. The present authors propose an algorithm obtained by reinterpreting the CTW algorithm from the viewpoint of Bayes coding. This algorithm can be applied to a wide class of prior distributions for finite-alphabet FSMX sources. The algorithm can be regarded both as a generalized version of the CTW procedure and as a practical context-tree algorithm for the adaptive Bayes coding studied in Matsushima et al. (1991). Moreover, the proposed algorithm is free from the underflow that frequently occurs in the CTW procedure. © 1994 IEEE.
Stopping rules based on a posteriori probability for every pattern and every stage
Joe Suzuki, Toshiyasu Matsushima, Shigeichi Hirasawa, Hiroshige Inazumi
Electronics and Communications in Japan (Part III: Fundamental Electronic Science) 72 ( 8 ) 71 - 82 1989 [Refereed]
Design method of intelligent interface for minimizing the number of questions and answers
Joe Suzuki, Toshiyasu Matsushima, Shigeichi Hirasawa, Hiroshige Inazumi
Bulletin of Centre for Informatics (Waseda University) 8 41 - 48 1988.09
View Summary
Most information retrieval systems are denoted as a set of Horn clauses representing the average user's concept in order to gain solutions which satisfy some predicates about conditions demanded by the user. In this paper, the system prepares the default rules considering general user's requirements in advance. Each user inputs rules representing individual demands sequentially. Both of the rules are matched with each other, and the solution which has the highest value of the certainty factor is derived. The strategy on how we can gain the solution with the minimum number of the questions and answers is investigated based on the certainty factor computed by the minimax rule. The game theoretical framework is established and the optimal question strategy is shown in a minimax criterion. Our scheme has a large merit of applying not only to individual systems but also to general information retrieval systems.
Stopping Rules with Posterior Probability Thresholds for Every Pattern and Every Stage
Joe Suzuki, Toshiyasu Matsushima, Hiroshige Inazumi, Shigeichi Hirasawa
vol.J71-A ( no.6 ) 1299 - 1308 1988 [Refereed]
Modified Product Codes
Shigeichi Hirasawa, Masao Kasahara, Toshihiko Namekawa, Yasuo Sugiyama
IEEE Transactions on Information Theory 30 ( 2 ) 299 - 306 1984 [Refereed]
View Summary
By modifying product codes, a new coding scheme and its decoding method are proposed. Compared to a product code A, the first-stage code A_1 of the new code A_M is constructed in the same way as that of the code A except that it has at least one subcode, while the second-stage codes A_2^(j) of the code A_M are a set of codes with the same length and different rates. The new coding scheme has a smaller upper bound on the probability of decoding error than the original product coding scheme for any given nonzero rate less than the capacity of a binary symmetric channel. An example is given for which the rate is increased compared with the original product code, at a fixed probability of decoding error, for a relatively short code length. © 1984 IEEE
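The baseline being modified here is the ordinary two-level product code. Its simplest instance, single parity checks on both rows and columns, already corrects any single bit error by intersecting the failing row and column; this toy baseline is an illustration of the unmodified scheme, not the construction A_M itself:

```python
def encode_product(data):
    """Append a parity bit to every row, then a parity row over the columns,
    forming the (k1+1) x (k2+1) codeword array of a single-parity product code."""
    rows = [r + [sum(r) % 2] for r in data]
    parity_row = [sum(rows[i][j] for i in range(len(rows))) % 2
                  for j in range(len(rows[0]))]
    return rows + [parity_row]

def correct_single_error(word):
    """Locate one flipped bit as the intersection of the odd-parity row
    and the odd-parity column, and flip it back."""
    bad_rows = [i for i, r in enumerate(word) if sum(r) % 2]
    bad_cols = [j for j in range(len(word[0]))
                if sum(word[i][j] for i in range(len(word))) % 2]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        word[bad_rows[0]][bad_cols[0]] ^= 1
    return word
```

Flipping any one bit of a codeword array makes exactly one row parity and one column parity fail, and the decoder restores the original array; a clean codeword passes through unchanged.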
An Improvement of Error Exponents at Low Rates for the Generalized Version of Concatenated Codes
Shigeichi Hirasawa, Masao Kasahara, Toshihiko Namekawa, Yasuo Sugiyama
IEEE Transactions on Information Theory 27 ( 3 ) 350 - 352 1981 [Refereed]
View Summary
The lower bound to the error exponent for the generalized version of concatenated codes is shown to be improved at low rates for binary-input memoryless channels. © 1981 IEEE
Computer Engineering
Baifukan 2001
Introduction to Information Theory
Baifukan 2000
Introduction to Coding Theory
Baifukan 1999
Information Theory
Baifukan 1996
Computer Engineering for Science and Engineering Students
Shokodo 1990
Studies on Information Theory Applications
Studies on Information Systems
Development of an Individually Optimized Learning System for Language Learning Based on Assessment of Learning States
Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research
Project Year :
Umezawa Katsuyuki, Nakazawa Makoto, Hirasawa Shigeichi, Nakano Michiko, Yoshida Satoshi
Research on a Data Analysis Platform for Business Value Creation and Time-Varying Collaborative Filtering
Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research
Project Year :
Kobayashi Manabu, Hirasawa Shigeichi, Matsushima Toshiyasu
View Summary
One objective of this research is to construct a data analysis platform (DAPF) that performs analysis without taking data outside, and to propose effective ways of operating it. To date, we have completed both a cloud-based DAPF (CDAPF) and an on-premises DAPF (ODAPF). The CDAPF is already in operation, and three projects analyzing real data from Mizuho Bank are currently being carried out on it; through this, the functions and security a DAPF should have are becoming clear. For the CDAPF, we additionally enabled VPN connections in FY2021, making secure data analysis possible from outside the university. We also defined check requirements for preventing data from being taken outside and proposed ways to make operation more efficient.
Regarding machine learning for creating business value from customers' consumption behavior, explainability has recently become very important: analysts increasingly require that estimated data structures and the meaning of prediction functions be understandable. In FY2021, focusing on this explainability, we considered the primitive problems of regression and classification, set up a generative model for regression (or decision) trees, presented a method that outputs the Bayes-optimal prediction under this generative model as the decision function, and, by applying boosting to it, proposed a method that performs well even on very complex problems. This method can be shown to be a kind of generalized additive model, so explainability is guaranteed; it makes it possible to visualize time-series effects and can be extended to nonstationary settings.
Furthermore, multiclass classification, including deep learning, is carried out by combining multiple binary classifiers, and error-correcting output coding (ECOC) has been proposed as an effective way to do this. For this method, we constructed theoretically optimal codes and analyzed their theoretical performance.
A Programming Self-Study System with an AI Coach
Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research
Project Year :
Saito Tomohiko, Hirasawa Shigeichi, Matsushima Toshiyasu
View Summary
The goal of this research is to construct a Web-based programming exercise system that estimates learners' weaknesses from their answer histories and recommends appropriate problems. The system targets high school and university students aiming for top rankings in programming contests such as ACM-ICPC and AtCoder, and supports the C and Java programming languages. To improve at programming, it is most important that learners accurately grasp their own weaknesses and solve a large number of exercises suited to their current level. Ideally, a dedicated human coach who knows both programming and the learner well would prescribe the training menu; this system behaves as if an AI were taking over that role.
The main results this year concern the implementation of the Web system and user interface. The first is a browser extension, written in JavaScript, to support learning on the Web. To address problems that arise easily in Web-based study, such as getting sidetracked and not returning to study once sidetracked, we created a Google Chrome extension with functions for setting study schedules, displaying study status, warning on leaving a study session, warning when a study deadline has passed, and creating and sharing learning packs. The second is a learning tool based on a chatbot, i.e., software that converses with humans in spoken-style language. To give more of a feeling of talking with a human coach, we developed a chatbot-based learning tool and verified its learning effects.
In addition, as by-products of this research, drawing on our findings in machine learning, Web applications, and educational technology, we developed an online event system using a virtual stamp rally, a baseball-spectating system using data visualization, and other systems.
Development of a self-study system that can feel the learner's emotion beyond time and space for language learning
Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research
Project Year :
Umezawa Katsuyuki
View Summary
The purpose of this study is to develop and evaluate a self-study system equipped with an artificial teacher who senses the learner and gives advice, based on a unified framework for language learning. The term "sense the learner" means that the system side understands the learner's learning status. In this study, we developed a method and system for detecting learners' stumbling blocks and estimating their learning status, which are necessary for the development of a self-study system. We also conducted a demonstration experiment for learning English and programming languages using these methods and system. In addition, we investigated the relationship between English education and programming language education, and the use of these methods in education. Furthermore, we studied the possibility of substituting measurement instruments other than EEG for the dissemination of the results of this research.
Programming Education System Based on Big Data and Learning Analytics for Beginners
Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research
Project Year :
Kobayashi Manabu
View Summary
In this study, we built an e-learning programming education Web system for real classes of beginners and analyzed the resulting big data using learning analytics. Specifically, by automatically scoring programs with machine learning methods, the system can collect various programming logs from learners and score with high accuracy while significantly reducing teachers' work. Furthermore, we proposed a method, based on statistical collaborative filtering, for automatically extracting learners and programming exercise tasks that show the same scoring tendencies, and we showed the effectiveness of the proposed model and estimation method.
Development of Learner Model and Learning Support System using Context Awareness
Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research
Project Year :
Nakazawa Makoto
View Summary
We constructed an e-learning system capable of collecting various contexts. Through demonstration experiments using this system, we clarified which learner behavior patterns are signs of stumbling blocks in study and of declining interest. In particular, we constructed a learner model by identifying characteristic patterns in the contexts, such as the order in which textbooks are read and error patterns in programming. Based on this model, we realized a system that can feed back the information teachers need to guide students.
Fundamental study on business analytics technologies on big data era
Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research
Project Year :
Goto Masayuki
View Summary
The objective of this study is to develop and deepen analytical technologies for large-scale and diverse business data (business analytics) and to propose new analytical models corresponding to various business data.
Specifically, we promoted research on the following themes: 1) data analytics for database information on EC sites, 2) analysis techniques for marketing information accumulated as text data, 3) statistical models for recommender systems, 4) theoretical analysis of web marketing models using information retrieval and recommendation technology, 5) analysis methods for high-dimensional, sparse, large-scale data, and 6) privacy-preserving data analysis technology.
Big Data Classification Methods and Applications Based on Statistical Machine Learning and Convex Optimization
Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research
Project Year :
KOBAYASHI Manabu, HIRASAWA Shigeichi, YOSHIMOTO Masashi
View Summary
By applying classification methods based on statistical machine learning and convex optimization to big data, we showed that high-precision solutions can be obtained efficiently for a wide range of problems.
Specifically, we proposed algorithms and analysis methods, and showed their effectiveness for the following problems:
(1) privacy-preserving distributed computation in the case where several parties hold different secret data, (2) latent class model analysis for EC sites and institutional research, (3) dynamically reconfigurable circuit design, (4) document classification based on L1 optimization, (5) lossless data compression using CART, (6) fault diagnosis using Markov random fields, and (7) acquisition and visualization of programming edit histories for many students.
Modelling of global literacy education at university and development of automated scoring system for its outcome
Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research
Project Year :
NAKANO Michiko, KONDO Yusuke, NAGAOKA Keizo, SONGMUANG Pokpong, YOSHIDA Satoshi, HIRASAWA Shigeichi, KOIZUMI Daiki, OWADA Kazuharu, UEDA Norifumi, OYA Masanori, SUGITA Yoshihito, TSUTSUI Eiichiro, NAKAZAWA Makoto
View Summary
The model of Global Literacy Education includes communication management, learning strategies, teamwork, critical thinking, synthesis skills, analytic skills, problem solving, project management, and diversity management. Global literacy education is realized in a package of three-staged courses: English Tutorials; Critical Reading and Academic Writing (critical thinking, analytic, synthesis, and problem-solving skills); Discussion English Tutorials (communication management, learning strategies, and problem-solving skills); and a Cross-Cultural Distance Learning course (project management, problem-solving, and diversity management skills). All of these courses are based on the Common European Framework of Reference for Languages (CEFR). Course contents are digitalized and provided with automatic assessment of learning outcomes, along with interactive widgets and learner logs that help instructors give personalized feedback.
Research and development of the next generation style of e-learning on the cloud computing environment
Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research
Project Year :
HIRASAWA Shigeichi, NAKANO Michiko, TAMAKI Kinya, NAKAZAWA Makoto, KOIZUMI Daiki
View Summary
This research project took as its main theme the next-generation style of e-learning in the cloud computing environment, from Japanese fiscal year (FY) 2011 to FY 2013. In FY 2011, the project assumed an e-learning style in which learners use a virtual desktop environment; in FY 2012, one in which they study with digital textbooks. In FY 2013, it constructed a prototype e-learning support system in which various learner logs can be preserved and visualized in the cloud. For these e-learning styles, including the prototype, the project ran evaluation experiments; interesting results were obtained, and the prototype's effectiveness was demonstrated to some extent. Part of the results was presented at related conferences.
Modeling and optimization of the sensor networks based on the multiterminal information theory and decision theory
Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research
Project Year :
MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
Application and Analysis of Parallel Iterative Algorithms for Calculating Posterior Probability to Practical Codes and Channels
Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research
Project Year :
MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
View Summary
For their good error-correcting ability, turbo codes and the turbo decoding algorithm, proposed in the research area of error-correcting codes, have recently attracted much attention as important 21st-century technologies for reliable communication and storage. It is known that iterative decoding algorithms, including the turbo decoding algorithm, can be regarded as applications of the belief propagation algorithm, which was proposed in the area of knowledge information processing as an uncertain-reasoning algorithm. The turbo decoding algorithm can be interpreted as efficiently computing an approximation of the posterior probability.
We found that the Extended Junction Graph (EJG) and the parallel iterative algorithm for calculating posterior probability, results of "The Analysis and Design of the Reliable Iterative Decoding Algorithm based on Uncertain Reasoning" (Grant-in-Aid for Scientific Research, 2000-2002), can handle more general probabilistic models than existing belief propagation and retain high performance even when the graph has loops (as almost all practical codes do).
In this research, we did the following:
(1) We applied the EJG and the parallel iterative algorithm to LDPC codes that have some small loops, sought further efficiency, and analyzed the performance.
(2) We applied the algorithm to channel models with a feedback channel, such as ARQ schemes, and analyzed the performance.
(3) We also applied the algorithm to the cryptanalysis of stream ciphers and analyzed the performance.
The Analysis and Design of the Reliable Iterative Decoding Algorithm based on Uncertain Reasoning
Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research
Project Year :
MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
View Summary
1. We analyzed and generalized our previously proposed reasoning algorithm based on both Bayesian networks and graphical models. We then designed a new algorithm for calculating posterior probability on probability models that include classes of efficient codes, examined it both theoretically and by simulation, and obtained good results.
2. We also examined the convergence and accuracy of the algorithm from the viewpoints of differential geometry, machine learning, and statistics, and gave it theoretical foundations.
3. We defined the generalized posterior probability distribution and mathematically formulated the problem of uncertain reasoning given an observed probability distribution. The result of our reasoning method turned out to have important mathematical meanings. We improved the algorithm by reducing both computation and memory occupation and by parallelizing its procedures.
4. We applied our algorithm to the Extended Junction Graph, our generalized graph, and proposed an efficient parallel belief propagation algorithm. As a typical application, we applied the algorithm mentioned in 2. to decoding on Extended Junction Graphs constructed from LDPC codes, and examined its properties both theoretically and experimentally.
5. We used the Extended Junction Graph for both convolutional codes and tail-biting codes, applied the algorithm mentioned in 2. to decoding both, and examined the results from both mathematical and computational points of view.
6. We applied our generalized algorithm for calculating posterior probability, proposed in 2001, to a special model class with guaranteed convergence. The whole procedure preserves its mathematical meaning as well as efficient computation and memory occupation.
7. We extended our 2001 results on the decoding algorithm and examined them from theoretical and experimental points of view.
Coding Techniques and Their Applications to Information Systems
Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research
Project Year :
HIRASAWA Shigeichi
View Summary
For the targets set for this year, we obtained the following results.
1. Data mining and text mining
For text data such as newspaper articles, we proposed an automatic classification method and a scheme that improves classification accuracy by focusing on compound words. We also proposed a discretization method that takes correlations between attributes into account when attribute values in a large-scale database are continuous; both were presented at the IEICE Society Conference. We proposed a new method for extracting association rules from questionnaire data in which fixed-choice and free-text items are mixed: the complexity of multi-valued attributes is evaluated from their frequencies using entropy, and the result is reflected in a correction of the minimum support. This method also handles word frequencies in text in a reasonable way, and we demonstrated its effectiveness on real data. The work was presented at the Symposium on Information Theory and its Applications (SITA).
2. Coding techniques
We proposed an effective interleaver design method for block turbo codes: by adding constraints to the permutation matrix, the minimum distance of the constructed turbo code can be increased. We concretely gave and evaluated the conditions, the derivation of the permutation matrix, and the minimum distance for given component codes. We also proposed and evaluated a decoding algorithm for concatenated codes using list decoding, and gave a new construction of periodic convolutional codes with a derivation of their performance. All of these were presented at SITA.
Application of coding techniques to multimedia
Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research
Project Year :
HIRASAWA Shigeichi, INAZUMI Hiroshige, TAJIMA Masato, NISHIJIMA Toshihisa, KOHNOSU Toshiyuki, MATSUSHIMA Toshiyasu
View Summary
(1) In source coding, we studied data compression for text. We theoretically derived the difference in code lengths between codes based on the MDL principle and Bayes codes, and applied these results to the model selection problem.
We also developed an approximately optimal algorithm that reduces the computational complexity of the Bayes code, applied it to benchmark files, and obtained good results. For compression of speech, images, and so on, we proposed an algorithm applying trellis codes to the source coding problem and evaluated it experimentally.
(2) In channel coding, we developed an algorithm for decoding beyond the BCH bound for BCH codes, and an algorithm for soft-decision decoding of block codes. From the viewpoint of computational complexity, we found a practical decoding method.
For convolutional codes, we elucidated the relation between Viterbi decoding and syndrome decoding, applied the results to the ARQ method, and found a decoding method with excellent performance at low rates.
(3) As an application of error-correcting codes to LSI test methods, we performed signature analysis by parallel processing.
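The comparison in item (1) between MDL-based codes and Bayes codes can be illustrated with the simplest Bayes code, the Krichevsky-Trofimov (KT) mixture for a binary memoryless source. The sketch below is our own illustration, not the project's algorithm: it computes the KT code length and compares it with the empirical entropy, and the gap is the familiar (1/2) log2 n model-cost term.

```python
import math

def kt_code_length(bits):
    """Code length in bits of the Krichevsky-Trofimov (KT) mixture for a
    binary memoryless source: the simplest example of a Bayes code."""
    zeros = ones = 0
    total = 0.0
    for x in bits:
        # sequential predictive probability under the Dirichlet(1/2, 1/2) prior
        if x:
            p = (ones + 0.5) / (zeros + ones + 1.0)
            ones += 1
        else:
            p = (zeros + 0.5) / (zeros + ones + 1.0)
            zeros += 1
        total -= math.log2(p)
    return total

def empirical_entropy_bits(bits):
    """n * H(p_hat): the ideal code length of the best memoryless model in hindsight."""
    n, k = len(bits), sum(bits)
    if k in (0, n):
        return 0.0
    p = k / n
    return n * (-p * math.log2(p) - (1 - p) * math.log2(1 - p))

seq = [0, 0, 1, 0, 0, 0, 1, 0] * 50  # biased binary sequence, n = 400
# The Bayes (KT) code length exceeds the hindsight-optimal length by
# roughly (1/2) * log2(n) bits, the model-cost term also appearing in MDL.
print(round(kt_code_length(seq), 1), round(empirical_entropy_bits(seq), 1))
```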
Management and access control method for secret documents in the public database center
Japan Society for the Promotion of Science Grants-in-Aid for Scientific Research
Project Year :
TOMINAGA Hideyoshi, SUZUKI Joe, KAMEYAMA Wataru, KOMATSU Naohisa, HIRASAWA Shigeichi, KASAHARA Masao
View Summary
In the last few years, many workstations and terminals have been connected into a single universal network. These workstations process a wide variety of documents: highly personal information, published data, electronic magazines, private mail, and so on. Establishing a public common database connected to an open network is an important step toward a highly efficient system shared by many customers at reasonable cost. A safe-deposit box is a very convenient facility because many kinds of assets and important personal documents are stored with high security at reasonable cost. The same function will have to exist in the future information society built on a single universal network. We have proposed a secure document image management system which manages private information at reasonable cost with high security.
To realize a truly secure system in such an open facility, there are two main problems: leakage and illegal occupation. Leakage means that the content of a document can be extracted by someone who is not an authorized person. Illegal occupation means that the database management function is impaired by a mass of stored messages that keep growing in database storage.
To solve the above problems, we have studied the following items.
1. Concrete system configuration and basic functions
2. Network access protocols for deposit and retrieval procedures that maintain high security
3. Database management protocol for secure document management
4. Customer identification scheme using physical characteristics such as sensed fingerprint data
5. Secure key sharing schemes using smart cards with no pre-communication
6. Scramble algorithm for document images using a space-filling curve
We have also constructed a real simulation system using G4 FAX, an ISDN network, workstations, and a smart card system in order to evaluate the proposed protocols in terms of security and efficiency.
Foundations and applications of digital libraries
Knowledge discovery from large-scale databases
Optimal design of information systems
Development and application of coding techniques
On knowledge information processing systems for management science and engineering
Studies on applications of information theory
Studies on code construction methods and their performance evaluation
Performance Evaluation of ECOC Considering Estimated Probability of Binary Classifiers
Gendo Kumoi, Hideki Yagi, Manabu Kobayashi, Masayuki Goto, Shigeichi Hirasawa
Proceedings of the 10th World Conference on Information Systems and Technologies 2022 (WORLD CIST 2022) 2022.04 [Refereed]
Research paper, summary (international conference)
A Study on the Optimization of the ECOC Method for Multi-label Classification Problems
14 ( 3 ) 1 - 10 2021.08
View Summary
One method for constructing a multi-valued classifier from a combination of given binary classifiers is Error-Correcting Output Coding (ECOC), which is based on error-correcting codes and introduces a coding-theory framework. Although this method is experimentally known to perform well on real data, the theoretical optimality of the classification accuracy of ECOC had not been clarified. In this study, we show sufficient conditions for ECOC to be an optimal multi-valued classification method under the assumption that the binary classifiers achieve maximum posterior probability classification. As a result, we show that one-versus-all and exhaustive codes are optimal multi-valued classification methods under the same assumptions. This suggests one direction for the discussion of optimality for various ECOC methods.
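As a minimal illustration of how the Hamming distance of the codeword table drives ECOC decoding, here is a toy sketch; the 6-column table for four categories is our own choice (minimum Hamming distance 3), not one from the paper.

```python
# Minimal ECOC sketch: each of four categories gets a 6-bit codeword; each
# column defines one binary subproblem, and a new point is assigned to the
# category whose codeword is closest in Hamming distance to the vector of
# binary classifier outputs. The table below is a hand-made example with
# minimum Hamming distance 3, so any single noisy binary output is corrected.
CODEWORDS = {
    "A": (0, 0, 0, 0, 0, 0),
    "B": (0, 0, 0, 1, 1, 1),
    "C": (0, 1, 1, 0, 0, 1),
    "D": (1, 0, 1, 0, 1, 0),
}

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def ecoc_decode(binary_outputs):
    """Return the category whose codeword is nearest in Hamming distance."""
    return min(CODEWORDS, key=lambda c: hamming(CODEWORDS[c], binary_outputs))

print(ecoc_decode((0, 1, 1, 0, 0, 1)))  # noiseless outputs for category C
print(ecoc_decode((1, 1, 1, 0, 0, 1)))  # first bit flipped by a noisy classifier
```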
A Study on Factorization Machines Based on Mixed-Norm Regularization
Kenta Mikawa, Manabu Kobayashi, Masayuki Goto, Shigeichi Hirasawa
Proceedings of the Japan Industrial Management Association Spring Conference (Web) 2019 2019
An Evaluation of an Effective Flipped Classroom based on Log Information of Self-study
116 ( 438 ) 1 - 6 2017.01
A Study on Codeword Table Construction in Multi-valued Classification Methods Based on Error-Correcting Codes Using Artificial Data
Gendo Kumoi, Kenta Mikawa, Hideki Yagi, Masayuki Goto, Shigeichi Hirasawa
Proceedings of the 40th Symposium on Information Theory and its Applications (SITA) (CD-ROM) 2017
Learning Analytics Using an Edit-History Visualization System ~ An Analysis Model Integrating Edit Histories and Score Data in a C Programming Course ~
Masayuki Goto, Kenta Mikawa, Gendo Kumoi, Manabu Kobayashi, Michitaka Aramoto, Shigeichi Hirasawa
Proceedings of the 78th National Convention 2016 ( 1 ) 533 - 534 2016.03
View Summary
This talk reports a concrete use case of an edit-history visualization system in "Fundamental Exercises in Information Processing", a course that teaches basic information processing methods and algorithm construction using C programming. Specifically, using the edit-history visualization system developed by Aramoto et al., we collect edit-history data from students' program writing, integrate it with existing educational data such as in-class quizzes, reports, and examination scores, and examine how to use the result for educational support. In this way, we clarify concrete methodologies and applicable analysis models for learning analytics in programming courses.
Linear Programming Decoding of Binary Linear Codes over Symbol-Pair Read Channels
115 ( 214 ) 1 - 6 2015.09
On relations between the DS2 bound and the Shulman-Feder bound for Forney's generalized decoding
NIINOMI Toshihiro, YAGI Hideki, HIRASAWA Shigeichi
IEICE technical report. Information theory 114 ( 224 ) 13 - 18 2014.09
View Summary
For maximum likelihood decoding, Shamai and Sason pointed out that various known upper bounds are special cases of the DS2 bound. Recently, Hof et al. extended this discussion to generalized decoding (GD), which was introduced by Forney with decision criterion FR. Hof et al. also derived the Shulman-Feder bound for GD (GD-SF) for binary-input output-symmetric channels under certain conditions. However, this bound employs only one parameter, whereas Forney's original upper bound has two. In this paper, by choosing a proper function g(y), we derive a new GD-SF with two parameters and clarify that this bound is tighter than Hof et al.'s GD-SF.
Katsuyuki Umezawa, Takashi Ishida, Manabu Kobayashi, Shigeichi Hirasawa
Proceedings of the 76th National Convention 2014 ( 1 ) 355 - 357 2014.03
View Summary
In this study, through the creation of textbooks for university education, we examine development policies for electronic teaching materials that include multimedia and interactive content.
Iterative Multiuser Joint Decoding based on Augmented Lagrangian Method
HORII Shunsuke, SUKO Tota, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 113 ( 228 ) 13 - 17 2013.09
View Summary
In this paper, we develop an iterative multiuser joint decoder for code-division multiple-access (CDMA) signals based on a distributed optimization algorithm. For the joint decoding problem, decoding based on the turbo principle is widely used: the algorithm consists of a soft-input soft-output (SISO) channel decoder and a SISO multiuser detector, and it can be derived as an application of the sum-product algorithm. On the other hand, in the field of error-correcting codes, decoding algorithms based on convex optimization have been attracting a great deal of attention; decoding based on linear programming (LP) has a decoding error rate comparable to that of the sum-product algorithm, with stronger theoretical guarantees. We formulate the joint decoding of CDMA signals as a convex optimization problem and present a relaxed form of the problem. Moreover, we propose a distributed algorithm that efficiently solves the relaxed problem, based on the alternating direction method of multipliers (ADMM). We also examine the performance of the proposed decoder through numerical simulations.
Lossless Image Data Compression Based on Regression Trees
KOBAYASHI Manabu, HIRASAWA Shigeichi
IEICE technical report. Nonlinear problems 113 ( 116 ) 117 - 122 2013.07
View Summary
It is well known that random forests, proposed by L. Breiman in 2001, achieve high accuracy for classification and regression problems. For regression, this method generates many regression trees using CART. In this manuscript, we propose lossless image data compression based on regression trees. We treat previously compressed pixels as explanatory variables and the next pixel to compress as the objective variable. First, the image data to compress is "learned" by CART or random forests, and the resulting regression trees are coded. Next, each pixel is predicted using these regression trees, and the difference between the predicted and actual pixel values is compressed. Finally, we show that the proposed method can reduce the compression ratio in comparison with the PNG format.
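The predict-then-compress pipeline described here can be sketched as follows. Note the stand-in: instead of regression trees learned by CART or random forests, a fixed "average of left and upper neighbor" predictor is used so the example stays self-contained; only the residual-coding idea is illustrated.

```python
# Predict-then-compress sketch: predict each pixel from already-seen neighbors
# and encode only the residual. The paper learns the predictor with CART /
# random forests; here a fixed neighbor-average predictor stands in.
import math

def residuals(img):
    """Raster-scan prediction residuals for a 2-D list of pixel values."""
    out = []
    for i, row in enumerate(img):
        for j, x in enumerate(row):
            left = row[j - 1] if j > 0 else 0
            up = img[i - 1][j] if i > 0 else 0
            pred = (left + up) // 2
            out.append(x - pred)
    return out

def entropy_bits(symbols):
    """Empirical entropy in bits/symbol: a proxy for achievable code length."""
    n = len(symbols)
    counts = {}
    for s in symbols:
        counts[s] = counts.get(s, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A smooth synthetic "image": neighboring pixels are highly correlated,
# so residuals concentrate near zero and their entropy drops sharply.
img = [[10 * i + j for j in range(16)] for i in range(16)]
raw = [x for row in img for x in row]
print(entropy_bits(raw), entropy_bits(residuals(img)))
```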
Variable Order Transition Probability Markov Decision Process for the Recommendation System
6 ( 1 ) 20 - 30 2013.03
Constructions of A Large Class of Optimum Linear Constant Weight Codes
KASAHARA Masao, HIRASAWA Shigeichi
IEICE technical report. WBS, Wideband System 112 ( 462 ) 53 - 58 2013.03
View Summary
We present a new method of constructing optimum linear constant weight codes based on various generalizations of the (u, u+v) construction. Using it, we construct the superimposed code C^{(h_1, h_2, ..., h_l)}_{(s_1, s_2, ..., s_l)} and obtain a large class of optimum linear constant weight codes. We present large classes of optimum linear constant weight codes for k = 2 and k = 3 with n ≤ 128, as well as optimum linear constant weight codes meeting the BV bound for k = 2, 3, 4, 5, and 6 with n ≤ 128. The author presents the following conjectures: (i) C^{(h_1)}_{(s_1)} given in this paper yields the optimum linear constant weight code with code length n = 3h_1, number of information symbols k = 2, and minimum distance d = 2h_1 for any h_1; (ii) C^{(h_1)}_{(s_1)} yields the optimum linear constant weight code with n = 7h_1, k = 3, and d = 4h_1 for any h_1; (iii) the code C^{(h_1, h_2, ..., h_l)}_{(s_1, s_2, ..., s_l)} yields an optimum linear constant weight code with length n = 2^k - 2, number of information symbols k, and minimum distance d = 2^{k-1}.
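The classical (u, u+v) (Plotkin) construction that this report generalizes can be checked concretely: combining a [4,3,2] even-weight code with the [4,1,4] repetition code yields the [8,4,4] code, matching the known minimum distance min(2*d1, d2).

```python
# The (u, u+v) construction: from equal-length codes C1 and C2, build
#   C = { (u, u+v) : u in C1, v in C2 },
# whose minimum distance is min(2*d(C1), d(C2)).
from itertools import product

def all_codewords(generator):
    """All GF(2) linear combinations of the generator rows."""
    n = len(generator[0])
    words = []
    for coeffs in product([0, 1], repeat=len(generator)):
        w = [0] * n
        for c, row in zip(coeffs, generator):
            if c:
                w = [a ^ b for a, b in zip(w, row)]
        words.append(tuple(w))
    return words

def u_u_plus_v(c1_words, c2_words):
    return [tuple(u) + tuple(a ^ b for a, b in zip(u, v))
            for u in c1_words for v in c2_words]

def min_distance(words):
    # for a linear code, minimum distance = minimum nonzero weight
    return min(sum(w) for w in words if any(w))

# C1: [4,3] even-weight code (d=2); C2: [4,1] repetition code (d=4)
C1 = all_codewords([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]])
C2 = all_codewords([[1, 1, 1, 1]])
C = u_u_plus_v(C1, C2)
print(len(C), min_distance(C))  # 16 codewords, distance min(2*2, 4) = 4
```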
Constructions of A Large Class of Optimum Linear Constant Weight Codes
KASAHARA Masao, HIRASAWA Shigeichi
Technical report of IEICE. ISEC 112 ( 461 ) 53 - 58 2013.03
View Summary
We present a new method of constructing optimum linear constant weight codes based on various generalizations of the (u, u+v) construction. Using it, we construct the superimposed code C^{(h_1, h_2, ..., h_l)}_{(s_1, s_2, ..., s_l)} and obtain a large class of optimum linear constant weight codes. We present large classes of optimum linear constant weight codes for k = 2 and k = 3 with n ≤ 128, as well as optimum linear constant weight codes meeting the BV bound for k = 2, 3, 4, 5, and 6 with n ≤ 128. The author presents the following conjectures: (i) C^{(h_1)}_{(s_1)} given in this paper yields the optimum linear constant weight code with code length n = 3h_1, number of information symbols k = 2, and minimum distance d = 2h_1 for any h_1; (ii) C^{(h_1)}_{(s_1)} yields the optimum linear constant weight code with n = 7h_1, k = 3, and d = 4h_1 for any h_1; (iii) the code C^{(h_1, h_2, ..., h_l)}_{(s_1, s_2, ..., s_l)} yields an optimum linear constant weight code with length n = 2^k - 2, number of information symbols k, and minimum distance d = 2^{k-1}.
Prototyping Electronic Teaching Materials for University Education ~ Interactive Content for Information-Mathematics Education ~
Takashi Ishida, Manabu Kobayashi, Katsuyuki Umezawa, Shigeichi Hirasawa
Proceedings of the 75th National Convention 2013 ( 1 ) 471 - 472 2013.03
View Summary
In this paper, we examine the prototyping of electronic teaching materials with interactive content for information-mathematics education. Specifically, we create and evaluate interactive educational content for RSA encryption. With multi-platform deployment in mind, the content is built using only HTML, CSS, and JavaScript; mathematical formulas are rendered with MathJax. We further examine packaging it as a multi-platform application using PhoneGap.
Prototyping Electronic Teaching Materials for University Education ~ Utilizing Multimedia Content ~
Katsuyuki Umezawa, Manabu Kobayashi, Takashi Ishida, Shigeichi Hirasawa
Proceedings of the 75th National Convention 2013 ( 1 ) 469 - 470 2013.03
View Summary
In this study, we prototype an electronic book as a university textbook and evaluate the effect of its multimedia content on comprehension. Specifically, we prototype an e-book containing video and audio that supplement the textbook content, and evaluate its educational effect through user questionnaires.
Relation between the Fourier Coefficients and the Interaction Effects in the Experimental Design
Y. Ukita, T. Matsushima, S. Hirasawa
Journal of Communication and Computer 10 ( 2 ) 169 - 177 2013.02 [Refereed]
SUKO Tota, HORII Shunsuke, KOBAYASHI Manabu, GOTO Masayuki, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
112 ( 279 ) 107 - 111 2012.10
View Summary
In this paper, we study privacy-preserving linear regression analysis. We propose a new distributed calculation protocol that computes the least squares estimator in the case where two parties hold different types of explanatory variables, and we show that the protocol preserves privacy. Because the protocol involves iterative calculation, we evaluate the number of iterations with numerical experiments. Finally, we show an extended protocol for distributed calculation among κ parties.
A Proposal of Adaptive Metric Learning Using Category Information for Text Classification
MIKAWA Kenta, ISHIDA Takashi, GOTO Masayuki, HIRASAWA Shigeichi
112 ( 279 ) 83 - 88 2012.10
View Summary
The extended cosine measure has been proposed as a metric learning method that learns a metric matrix expressing the characteristics of training data. However, this method introduces a single metric matrix and estimates it from all training data, leaving room for improvement because document data normally has different statistical characteristics in each category. In this study, we propose a way of learning a metric matrix for each category. To show the effectiveness of the proposed method, simulation experiments are conducted.
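The extended cosine measure with a per-category metric matrix M_c can be sketched as below; the matrices here are hand-picked for illustration, whereas the paper learns one per category from training data.

```python
# Extended cosine measure:
#   sim_M(x, y) = (x^T M y) / (sqrt(x^T M x) * sqrt(y^T M y))
# With M = I this is the ordinary cosine similarity; a learned (or here,
# hand-picked) M reweights dimensions, e.g. down-weighting noisy terms.
import math

def quad(x, M, y):
    """Bilinear form x^T M y for small dense matrices."""
    return sum(x[i] * M[i][j] * y[j]
               for i in range(len(x)) for j in range(len(y)))

def extended_cosine(x, y, M):
    return quad(x, M, y) / math.sqrt(quad(x, M, x) * quad(y, M, y))

I = [[1, 0], [0, 1]]            # identity: ordinary cosine
M_cat = [[1, 0], [0, 0.1]]      # illustrative metric down-weighting dim 2

x, y = [1.0, 2.0], [1.0, -2.0]
print(extended_cosine(x, y, I))      # ordinary cosine (negative here)
print(extended_cosine(x, y, M_cat))  # metric emphasizing the first coordinate
```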
A Quantizer Design Method Based on Mutual Information Criteria for MIMO System
KOBAYASHI Manabu, YAGI Hideki, NINOMIYA Hiroshi, HIRASAWA Shigeichi
IEICE technical report. Information theory 112 ( 215 ) 59 - 64 2012.09
View Summary
B. M. Kurkoski et al. have proposed a quantizer design method based on maximization of the mutual information, together with an application to the decoding of LDPC codes. This method is constrained to binary-input discrete memoryless channels and is difficult to apply directly to general channels. In this study, we focus on MIMO communication systems and consider a quantizer design method based on mutual information criteria. We then propose a design method that maximizes the mutual information by the conjugate gradient method.
A Learning Algorithm of Threshold Value on the Automatic Detection of SQL Injection Attack
2012 ( 10 ) 1 - 6 2012.07
Variable Order Transition Probability Markov Decision Process for the Recommendation System
2012 ( 8 ) 1 - 6 2012.05
Fault Diagnosis Algorithm Based on Linear Programming in Multicomputer Systems
KOBAYASHI Manabu, HORII Shunsuke, TAKABATAKE Toshinori, HIRASAWA Shigeichi
The Transactions of the Institute of Electronics, Information and Communication Engineers. A 95 ( 4 ) 375 - 378 2012.04
Shigeichi Hirasawa, Takashi Ishida, Gendo Kumoi, Masayuki Goto
Proceedings of the National Convention 2012 ( 1 ) 423 - 425 2012.03
View Summary
We analyzed student questionnaires from "Introduction to Computers", a required specialized course in the Faculty of IT at Cyber University. The questionnaires mix multiple-choice and free-text questions. We proposed a lesson model, performed document clustering and classification as well as extraction of important sentences and characteristic words using the PLSI model, and obtained hints that lead to lesson improvement.
Takashi Ishida, Hideki Hatagami, Masayuki Goto, Shigeichi Hirasawa
Proceedings of the National Convention 2012 ( 1 ) 425 - 427 2012.03
View Summary
This paper reports the analysis of student questionnaires from fully on-demand information courses: computer-related courses at a four-year university where all classes are given over the Internet and no commuting is required. Since this is a new course style different from conventional campus-based education, new approaches to effective teaching and course improvement are needed. We therefore examine the characteristics of fully on-demand courses through a comparative analysis with student questionnaires from the same course content taught at a campus-based university.
Manabu Kobayashi, Masayuki Goto, Toshiyasu Matsushima, Shigeichi Hirasawa
Proceedings of the National Convention 2012 ( 1 ) 61 - 63 2012.03
View Summary
Automatic classification, which estimates the category of new data using training data with known categories, has been studied extensively, for example with support vector machines. In this study, assuming that the data in each category are generated according to a Markov model, we propose a method for estimating the classification error probability using the training data.
A Linear Programming Bound for Orthogonal Arrays Suited to Experimental Design
Tomohiko Saito, Yoshifumi Ukita, Toshiyasu Matsushima, Shigeichi Hirasawa
Proceedings of the 74th IPSJ National Convention 261 - 262 2012.03
A Note on ANOVA in an Experimental Design Model Based on an Orthonormal System
Yoshifumi Ukita, Toshiyasu Matsushima, Shigeichi Hirasawa
PROCEEDINGS 2012 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC) 1990 - 1995 2012 [Refereed]
View Summary
Experiments usually aim to study how changes in various factors affect the response variable of interest. Since the model used most often at present in experimental design is expressed through the effect of each factor, it is easy to understand how each factor affects the response variable. However, since the model contains redundant parameters, a considerable amount of time is often necessary to implement the procedure for estimating the effects. On the other hand, it has recently been shown that the model in experimental design can also be expressed in terms of an orthonormal system, in which case the model contains no redundant parameters. Moreover, the theorem on the sum of squares for 2-factor interactions, needed in the analysis of variance (ANOVA), had been obtained. However, 3-factor interactions often need to be considered in real cases, and the corresponding theorem had not been obtained until now. In this paper, we present the theorem on the sum of squares for 3-factor interactions in a model based on an orthonormal system. Furthermore, the theorem for interactions of 4 or more factors follows by a similar proof. Hence, in any real case, we can execute ANOVA in the model based on an orthonormal system.
A Note on Relation Between the Fourier Coefficients and the Interaction Effects in the Experimental Design
Yoshifumi Ukita, Toshiyasu Matsushima, Shigeichi Hirasawa
2012 4TH INTERNATIONAL CONFERENCE ON INTELLIGENT AND ADVANCED SYSTEMS (ICIAS), VOLS 1-2 604 - 609 2012 [Refereed]
View Summary
In this paper, we present the theorem of the relation between the Fourier coefficients and the 3-factor interaction effects. From this theorem, the 3-factor interaction effects can be easily obtained from the Fourier coefficients. Therefore, it is possible to implement the estimation procedures easily as well as to understand how any interaction affects the response variable in the model based on an orthonormal system.
An Error Probability Analysis of the Text Classification Using the CTW Algorithm
KOBAYASHI Manabu, GOTO Masayuki, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Nonlinear problems 111 ( 276 ) 109 - 114 2011.11
View Summary
The text classification problem has been investigated with various techniques, such as vector space models and support vector machines. On the other hand, the Context-Tree Weighting (CTW) algorithm has been proposed as an outstanding data compression algorithm, and experimental results of applying CTW to text classification have been reported. In this paper, for text classification using the CTW algorithm, we assume that each document in a category arises from one stochastic model. We then propose an analysis method to obtain the classification error probability for documents of finite length.
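A toy version of the assumed setting, with a plug-in maximum-likelihood Markov model per category instead of the CTW mixture, looks like this (the categories and documents are invented for illustration):

```python
# Each category is modeled as a first-order Markov source over characters;
# a document is assigned to the category whose (add-one smoothed) model
# gives it the highest likelihood. The paper analyzes the error probability
# of this kind of likelihood-based classification with CTW in place of the
# plug-in estimate used here.
import math
from collections import defaultdict

def train_markov(docs):
    """Count character transitions a -> b over all training documents."""
    counts = defaultdict(lambda: defaultdict(int))
    for doc in docs:
        for a, b in zip(doc, doc[1:]):
            counts[a][b] += 1
    return counts

def log_likelihood(doc, counts, alphabet):
    ll = 0.0
    for a, b in zip(doc, doc[1:]):
        row = counts[a]
        # add-one (Laplace) smoothing over the alphabet
        ll += math.log((row[b] + 1) / (sum(row.values()) + len(alphabet)))
    return ll

def classify(doc, models, alphabet):
    return max(models, key=lambda c: log_likelihood(doc, models[c], alphabet))

alphabet = set("ab")
models = {
    "alternating": train_markov(["ababababab" * 5]),
    "blocky": train_markov(["aaaaabbbbb" * 5]),
}
print(classify("abababab", models, alphabet))
print(classify("aaaabbbb", models, alphabet))
```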
KOBAYASHI Manabu, TAKABATAKE Toshinori, HIRASAWA Shigeichi
The IEICE transactions on information and systems 94 ( 4 ) 671 - 684 2011.04
Disk Allocation Methods for Cartesian Product Files by using Unequal Error Protection Codes
SAITO Tomohiko, INAZUMI Hiroshige, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report 110 ( 363 ) 7 - 12 2011.01
View Summary
Allocation methods for Cartesian product files on multiple disks using linear error-correcting codes have been proposed. In this paper, we propose an allocation method using unequal error protection (UEP) codes. Codewords of a UEP code have some special bits which are protected against a greater number of errors than the other bits. We first assume a model in which "*", meaning "don't care", appears with a different probability in each attribute of a query. In this case, the average response time can be calculated using the split distance distribution. We then calculate the average response time of the allocation method using UEP codes and show its effectiveness through numerical examples.
A Note on the Inference Algorithm on the Factor Graph based on the Linear Programming
HORII Shunsuke, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report 110 ( 363 ) 55 - 60 2011.01
View Summary
Probabilistic inference on graphical models is very important since it arises in many applications, including the theory of error-correcting codes, image processing, and speech recognition. Recently, linear-programming-based decoding algorithms for error-correcting codes have been receiving a lot of attention. Viewing the decoding problem as an instance of probabilistic inference on a graphical model, the corresponding factor graph has a specific structure: its functions can be classified into two classes, indicator functions and non-indicator functions, and each non-indicator function is connected to only one variable node. In contrast, factor graphs for general probabilistic inference problems may have non-indicator functions connected to more than one variable node. The aim of this study is to develop a linear-programming-based inference algorithm for such general inference problems.
Efficient Encoding of Generalized LDPC Codes
TERAMOTO Kenichi, HOSOYA Gou, GOTO Masayuki, HIRASAWA Shigeichi
IEICE technical report 110 ( 363 ) 79 - 84 2011.01
View Summary
The encoding complexity of LDPC codes and of Generalized LDPC (GLDPC) codes, which are constructed by combining ordinary LDPC codes with linear block codes, is proportional to the square of the code length when their generator matrices are used. J. Lu et al. proposed a linear-time encoding method called the "label-and-decide algorithm", which decomposes the Tanner graph of an arbitrary randomly constructed LDPC code into a hierarchical structure to calculate parity bits from information bits. In this study, we propose an efficient encoding method for arbitrary randomly constructed GLDPC codes. It applies the Dulmage-Mendelsohn decomposition to both the parity-check matrix and the base code of the GLDPC code, and calculates the parity bits using pseudo-inverse matrices of the component codes. We show the effectiveness of the proposed encoding algorithm by deriving the maximum number of encoding operations.
An Improved Bit Flipping Decoding Algorithm based on Messages of Binary Sequence
TANIGUCHI Yuki, HOSOYA Gou, GOTO Masayuki, HIRASAWA Shigeichi
IEICE technical report 110 ( 363 ) 61 - 66 2011.01
View Summary
Although the iterative threshold of bit-flipping (BF) decoding is lower than that of belief-propagation (BP) decoding, the BF decoding algorithm is an effective method compared with the BP decoding algorithm in terms of decoding complexity. It has been verified that BF decoding exhibits an error floor when the channel error probability is small. To overcome this problem, S. K. Planjery et al. have proposed a three-bit BF decoding algorithm whose messages combine an ordinary binary message with a two-bit message representing the reliability of a codeword bit. Although the decoding performance of three-bit BF decoding in the error-floor region is better than that of BF decoding, its decoding rule is so complicated that it can be applied only to regular LDPC codes with variable-node degree 3. In this paper, we propose a new three-bit BF decoding algorithm applicable to irregular LDPC codes. We show by simulation results that the performance of the proposed three-bit BF decoding rule is better than that of the ordinary three-bit BF decoding algorithm. We also analytically derive the performance of both three-bit BF decoding algorithms by density evolution and compare them to show the effectiveness of our algorithm.
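For reference, the plain one-bit-message BF rule that such multi-bit variants refine can be sketched as follows; the flip-all-maximum rule and the small Hamming-code example are illustrative assumptions, not the algorithm proposed in the paper:

```python
def bf_decode(H, y, max_iter=20):
    """Plain bit-flipping decoding over GF(2).

    H is a parity-check matrix (list of 0/1 rows), y a hard-decision word.
    Each iteration flips every bit taking part in the maximum number of
    unsatisfied checks, stopping once all checks are satisfied.
    """
    y = list(y)
    n = len(y)
    for _ in range(max_iter):
        syndrome = [sum(h * b for h, b in zip(row, y)) % 2 for row in H]
        if not any(syndrome):
            return y                                   # valid codeword found
        # number of unsatisfied checks each bit participates in
        counts = [sum(row[i] * s for row, s in zip(H, syndrome)) for i in range(n)]
        worst = max(counts)
        y = [b ^ 1 if counts[i] == worst else b for i, b in enumerate(y)]
    return y

# (7,4) Hamming code: a single flipped bit is corrected.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
received = [0, 0, 1, 0, 0, 0, 0]   # all-zero codeword with bit 2 flipped
print(bf_decode(H, received))      # [0, 0, 0, 0, 0, 0, 0]
```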
A Note on the Degrees of Freedom in an Experimental Design Model Based on an Orthonormal System
Yoshifumi Ukita, Toshiyasu Matsushima, Shigeichi Hirasawa
2011 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC) 2181 - 2185 2011 [Refereed]
View Summary
Experiments usually aim to study how changes in various factors affect the response variable of interest. Since the response model used most often at present in experimental design is expressed through the effect of each factor, it is straightforward to ascertain how each factor affects the response variable. However, since the response model contains redundant parameters, we must calculate the degrees of freedom defined by the number of independent parameters in the analysis of variance. In this paper, we show that through a description of experimental design based on an orthonormal system, the response model can be expressed using only independent parameters. Hence, we do not have to calculate the degrees of freedom defined by the number of independent parameters.
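A minimal illustration of a response model written in an orthonormal basis, using the Walsh functions over two-level factors (an assumption chosen for illustration; the paper's setting is more general): the transform coefficients of the responses are exactly the independent parameters (mean, main effects, interactions), so no redundant parameters appear.

```python
def walsh_hadamard(y):
    """Normalized fast Walsh-Hadamard transform of a length-2^m list.

    When y holds the responses of a full two-level factorial experiment
    (levels coded as +1/-1, index bits giving the level combination),
    the output coefficients are the mean, main effects, and interactions,
    i.e. the independent parameters of the orthonormal response model.
    """
    y = list(y)
    n = len(y)
    h = 1
    while h < n:                       # standard in-place butterfly
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = y[j], y[j + h]
                y[j], y[j + h] = a + b, a - b
        h *= 2
    return [v / n for v in y]

# Hypothetical 2^2 experiment: response y = 5 + 2*x1 - 1*x2, where
# x1 = (-1)^(high index bit) and x2 = (-1)^(low index bit).
responses = [6, 8, 2, 4]               # level combinations 00, 01, 10, 11
print(walsh_hadamard(responses))       # [5.0, -1.0, 2.0, 0.0]
```

The output reads [mean, effect of x2, effect of x1, interaction]; since the Walsh basis is orthonormal, no degrees-of-freedom bookkeeping is needed.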
A Note on Documents Classification using PLSI
KUMOI Gendo, ISHIDA Takashi, GOTO Masayuki, HIRASAWA Shigeichi
IEICE technical report 110 ( 301 ) 13 - 18 2010.11
View Summary
Recently, with the increase in electronic documents, effective knowledge acquisition has become an important issue, and document classification is one technique for it. In this paper, we use Probabilistic Latent Semantic Indexing (PLSI) as the document classification technique. PLSI is characterized by its ability to handle multiple categories and by deriving its solution with the EM algorithm. We propose a method that can classify documents from a small amount of training data using the EM algorithm. We apply this method to Web pages and show its effectiveness.
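The PLSI decomposition P(d, w) = Σ_z P(z) P(d|z) P(w|z) and its EM solution mentioned above can be sketched as follows (a didactic sketch; the function name and toy data are assumptions, and real implementations add tempering and restarts):

```python
import random

def plsi_em(counts, n_topics, n_iter=50, seed=0):
    """EM fitting of the PLSI model P(d, w) = sum_z P(z) P(d|z) P(w|z).

    counts[d][w] is the frequency of word w in document d.  Returns
    (pz, pd_z, pw_z) with pd_z[z][d] = P(d|z) and pw_z[z][w] = P(w|z).
    """
    rng = random.Random(seed)
    D, W, Z = len(counts), len(counts[0]), n_topics

    def normalize(v):
        s = sum(v)
        return [x / s for x in v]

    # random (strictly positive) initialization of the three distributions
    pz = normalize([rng.random() + 0.1 for _ in range(Z)])
    pd_z = [normalize([rng.random() + 0.1 for _ in range(D)]) for _ in range(Z)]
    pw_z = [normalize([rng.random() + 0.1 for _ in range(W)]) for _ in range(Z)]

    for _ in range(n_iter):
        nz = [0.0] * Z
        nd = [[0.0] * D for _ in range(Z)]
        nw = [[0.0] * W for _ in range(Z)]
        for d in range(D):
            for w in range(W):
                if counts[d][w] == 0:
                    continue
                # E-step: posterior P(z | d, w) for this (d, w) pair
                post = normalize([pz[z] * pd_z[z][d] * pw_z[z][w] for z in range(Z)])
                # accumulate expected counts for the M-step
                for z in range(Z):
                    r = counts[d][w] * post[z]
                    nz[z] += r
                    nd[z][d] += r
                    nw[z][w] += r
        # M-step: re-normalize the expected counts
        pz = normalize(nz)
        pd_z = [normalize(nd[z]) for z in range(Z)]
        pw_z = [normalize(nw[z]) for z in range(Z)]
    return pz, pd_z, pw_z

# Hypothetical toy corpus: two documents about each of two latent topics.
counts = [[4, 3, 0, 0], [3, 4, 0, 0], [0, 0, 4, 3], [0, 0, 3, 4]]
pz, pd_z, pw_z = plsi_em(counts, n_topics=2, n_iter=30)
```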
Estimation of the Effects in the Experimental Design using Fourier Transforms
Y. Ukita, T. Matsushima, S. Hirasawa
IEICE Trans. Fundamentals E93-A ( 11 ) 2077 - 2082 2010.11 [Refereed]
Linear Programming Decoding of Binary Linear Codes for Multiple-Access Channel
HORII Shunsuke, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 110 ( 205 ) 31 - 36 2010.09
View Summary
In this paper, we develop linear-programming (LP) decoding for the multiple-access channel with binary linear codes. For the single-user channel, LP decoding has been attracting much attention in recent years as a good approximation to maximum-likelihood (ML) decoding. We demonstrate how the ML decoding problem for the multiple-access channel with binary linear codes can be formulated as a linear programming problem. This is not straightforward because the objective function of the problem is, in general, a non-linear function of the codeword symbols. We introduce auxiliary variables such that the objective function becomes a linear function of those variables. The ML decoding problem then reduces to an LP problem; however, it is too complex for practical implementation. As in the single-user case, we formulate a relaxed LP problem. Just as for the single-user channel, the proposed decoder has the desirable ML certificate property: if the decoder outputs an integer solution, it is guaranteed to be the ML codeword. The computational complexity of the proposed algorithm is exponential in the number of users; however, we can reduce the computational complexity of the algorithm for the Gaussian multiple-access channel. Furthermore, we compare the performance of the proposed decoder with that of a decoder based on the sum-product algorithm.
GOTO Masayuki, ISHIDA Takashi, SUZUKI Makoto, HIRASAWA Shigeichi
Journal of Japan Industrial Management Association 61 ( 3 ) 97 - 106 2010.08
View Summary
Problems associated with document classification, an important application of text mining of text data, are focused on in this paper. There have been many models and algorithms proposed for text classification; one of these is a technique using a vector space model. In these methods, a digital document is represented as a point in the vector space which is constructed by morphological analysis and counting the frequency of each word in the document. In the vector space model, the documents can be classified using the distance measure between documents. However, there are specific characteristics in the vector space model for document classification. Firstly, it is not easy to automatically remove unnecessary words completely. The existence of unnecessary words is one of the characteristics of the text mining problems. Secondly, the dimensions of the word vector space are usually huge in comparison to the number of words appearing in a document. Although the frequencies of words appearing in a document could be small in many cases, many kinds of such words with small frequency can usually be used to classify the documents. In this paper, we evaluate the performance of document classification in the case where unnecessary words are included in the word set. Moreover, the performance of the distance measure between documents in a large dimensional word vector space is analyzed. From the asymptotic results about the distance measure, we can provide an explanation of the fact given in many experiments that classification using the empirical distance between documents calculated via the cosine measure is not particularly bad. It is also suggested that the KL-divergence is not useful for text mining problems.
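The cosine measure between documents referred to above is, concretely, the normalized inner product of their term-frequency vectors (the toy word counts below are hypothetical):

```python
import math

def cosine_similarity(u, v):
    """Cosine measure between two term-frequency vectors.

    Returns 1.0 for identical directions; unrelated documents in a
    high-dimensional word space tend toward 0.
    """
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical word counts over a shared five-word vocabulary.
doc_a = [3, 0, 1, 0, 2]
doc_b = [2, 0, 1, 0, 3]   # shares vocabulary with doc_a
doc_c = [0, 4, 0, 5, 0]   # disjoint vocabulary
print(cosine_similarity(doc_a, doc_b) > cosine_similarity(doc_a, doc_c))  # True
```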
Probabilistic Fault Diagnosis and Its Analysis in Large Scale Multiprocessor Systems
KOBAYASHI Manabu, TAKABATAKE Toshinori, AMANO Shinya, HIRASAWA Shigeichi
The IEICE transactions on information and systems 93 ( 8 ) 1602 - 1613 2010.08
Probabilistic Fault Diagnosis in Large Scale Multicomputer Systems
KOBAYASHI Manabu, TAKABATAKE Toshinori, AMANO Shinya, HIRASAWA Shigeichi
IEICE technical report 110 ( 82 ) 205 - 210 2010.06
View Summary
F. P. Preparata et al. have proposed a fault diagnosis model for finding all faulty nodes in a multiprocessor system by using the outcomes of tests in which each node tests some other nodes. In this paper, for probabilistic diagnosis models, we show an efficient diagnosis algorithm that obtains the posterior probability that each node is faulty given the syndrome. Furthermore, we propose a method to analyze the diagnostic error probability of this algorithm.
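A brute-force version of the posterior computation described above, for a toy system (the faulty-tester behavior below is a common assumption in probabilistic diagnosis models and may differ from the model analyzed in the paper; an efficient algorithm like the paper's would avoid this exhaustive enumeration):

```python
from itertools import product

def fault_posteriors(n, tests, syndrome, p):
    """Posterior P(node is faulty | syndrome) by exhaustive enumeration.

    Each node is faulty independently with prior p; a fault-free tester
    reports the true state of the tested node, while a faulty tester
    reports 0 or 1 with probability 1/2 each (assumed behavior).
    tests[k] = (tester, tested); syndrome[k] = reported outcome (1 = "faulty").
    """
    weight = [0.0] * n
    total = 0.0
    for faults in product([0, 1], repeat=n):
        w = 1.0
        for f in faults:                         # prior over fault patterns
            w *= p if f else (1 - p)
        for (i, j), out in zip(tests, syndrome):
            if faults[i]:
                w *= 0.5                         # faulty tester: coin flip
            elif out != faults[j]:
                w = 0.0                          # fault-free tester never lies
                break
        total += w
        for k in range(n):
            if faults[k]:
                weight[k] += w
    return [wk / total for wk in weight]

# Three nodes in a ring, each testing the next; only node 1 is accused.
posts = fault_posteriors(3, [(0, 1), (1, 2), (2, 0)], [1, 0, 0], p=0.1)
print(max(range(3), key=lambda k: posts[k]))   # node 1 is the most suspect
```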
Linear Programming Bounds of Orthogonal Arrays for Experimental Designs
T. Saito, Y. Ukita, T. Matsushima, S. Hirasawa
IEEE African Winter School on ITC 24 - 24 2010.06 [Refereed]
A Note on a Sampling Theorem for Functions over GF(q)^n Domain
Y. Ukita, T. Saito, T. Matsushima, S. Hirasawa
IEICE Trans. Fundamentals E93-A ( 6 ) 1024 - 1031 2010.06 [Refereed]
View Summary
In digital signal processing, the sampling theorem states that any real-valued function ƒ can be reconstructed from a sequence of values of ƒ that are discretely sampled with a frequency at least twice as high as the maximum frequency of the spectrum of ƒ. This theorem can also be applied to functions over a finite domain. Then, the range of frequencies of ƒ can be expressed in more detail by using a bounded set instead of the maximum frequency. A function whose range of frequencies is confined to a bounded set is referred to as a bandlimited function, and a sampling theorem for bandlimited functions over the Boolean domain has been obtained. It is important to obtain a sampling theorem for bandlimited functions not only over the Boolean domain (GF(2)^n) but also over the GF(q)^n domain, where q is a prime power and GF(q) is the Galois field of order q. For example, in experimental designs, although the model can be expressed as a linear combination of the Fourier basis functions and the levels of the factors can be represented by GF(q)^n, the number of levels often takes a value greater than two. However, a sampling theorem for bandlimited functions over the GF(q)^n domain has not been obtained. On the other hand, the sampling points are closely related to the codewords of a linear code; yet the relation between the parity-check matrix of a linear code and distinct error vectors, which is necessary for understanding the meaning of the sampling theorem for bandlimited functions, has not been obtained. In this paper, we generalize the sampling theorem for bandlimited functions over the Boolean domain to a sampling theorem for bandlimited functions over the GF(q)^n domain. We also present a theorem on the relation between the parity-check matrix of a linear code and distinct error vectors. Lastly, we clarify the relation between the sampling theorem for functions over the GF(q)^n domain and linear codes.
Constructions of A Large Class of Optimum Linear Codes for Small Number of Information Symbols
KASAHARA Masao, HIRASAWA Shigeichi
IEICE technical report 109 ( 446 ) 419 - 426 2010.02
View Summary
A new method of constructing efficient codes on the basis of the (u, u+v) construction is presented. We present a large class of optimum linear codes for a small number of information symbols; namely, we present the (3(2^m-1), m+1, 3・2^m-2) code, the (3(2^m-1)+1, m+1, 3・2^m-1) code, and the (3(2^m-1)+2, m+1, 3・2^m) code. We show that these codes meet the bound on linear codes (m≦6) due to Brouwer and Verhoeff. It is strongly conjectured that, for m>6, our codes meet the bound for any code length. We then present (n,2,d) codes that meet the bound of any linear code of length n≦125. It is strongly conjectured that the proposed (n,2,d) codes meet the bound of optimum linear codes of any code length. We also present (n,3,d) codes that meet the bound of any linear code of length n≦60 except n=8+7μ. It is strongly conjectured that our (n,3,d) codes meet the bound for any code length except n=8+7μ (μ=1,2,…). Finally, we present several optimum linear codes for k=4, 5, 6, and 7 information symbols.
A Note on Analysing LDPC Codes for Correcting a Burst Erasure
HOSOYA Gou, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report 109 ( 357 ) 7 - 10 2009.12
View Summary
A lower bound on the probability of the burst erasure correction capability of the regular low-density parity-check (LDPC) code ensemble under belief-propagation decoding is investigated. This bound is based on the second moment method, which yields an upper bound on the minimum span of stopping sets of the regular LDPC code ensemble.
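The connection between stopping sets and belief-propagation decoding over the erasure channel can be made concrete with a peeling-decoder sketch (the small Hamming parity-check matrix is an illustrative assumption): decoding stalls exactly when the remaining erased positions form a stopping set, i.e., every check sees zero or at least two erasures.

```python
def peel_decode(H, word):
    """Iterative (peeling) erasure decoding over the binary erasure channel.

    word: list of 0, 1, or None (erasure).  Each pass solves every check
    with exactly one erased position; decoding stalls when the remaining
    erased positions form a stopping set.
    """
    word = list(word)
    progress = True
    while progress:
        progress = False
        for row in H:
            erased = [i for i, h in enumerate(row) if h and word[i] is None]
            if len(erased) == 1:
                # parity forces the single erased bit in this check
                known = sum(word[i] for i, h in enumerate(row)
                            if h and word[i] is not None) % 2
                word[erased[0]] = known
                progress = True
    return word

H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
received = [0, None, 0, None, 0, 0, 0]   # two recoverable erasures
print(peel_decode(H, received))          # recovers the all-zero codeword
```

Erasing positions {4, 5, 6} instead hits every check at least twice, so the decoder stalls: that set is a stopping set, and its span is what the minimum-span analysis bounds.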
Efficiently Encodable Generalized LDPC Codes using a Structure of its Subcodes
TERAMOTO Kenichi, HOSOYA Gou, GOTO Masayuki, HIRASAWA Shigeichi
IEICE technical report 109 ( 143 ) 25 - 30 2009.07
View Summary
The combination of low-density parity-check (LDPC) codes and the iterative decoding algorithm has high performance. It has been shown that Generalized LDPC (GLDPC) codes, which are constructed by combining ordinary LDPC codes with linear block codes, are asymptotically good codes and nearly achieve channel capacity. Unfortunately, the encoding complexity of these codes is proportional to the square of the code length. In this paper, we propose linear-time encodable GLDPC codes by applying a systematic structure to their subcodes and the stair-like structure of extended Irregular Repeat-Accumulate (eIRA) codes to their base code. We show by numerical examples that the number of encoding operations for the proposed GLDPC codes is less than that of conventional ones. We also show by simulation results that the decoding performance of the proposed GLDPC codes is almost the same as that of ordinary codes.
TANIGUCHI Yuki, HOSOYA Gou, GOTO Masayuki, HIRASAWA Shigeichi
IEICE technical report 109 ( 143 ) 31 - 36 2009.07
View Summary
The performance of a combination of irregular low-density parity-check (LDPC) codes and the belief-propagation (BP) decoding algorithm strongly depends on the structure of their parity-check matrices, and some of the codes exhibit bad performance. In this paper, we propose methods for constructing structured irregular LDPC codes that work effectively with the BP decoding algorithm. We also develop effective updating rules according to the bit-node positions for the Shuffled BP decoding algorithm. We show by simulation results that the decoding performance of the proposed irregular LDPC codes is better than that of conventional ones.
Bayesian Forecasting of WWW Traffic on the Time Varying Poisson Model
KOIZUMI DAIKI, MATSUSHIMA TOSHIYASU, HIRASAWA SHIGEICHI
2009 ( 5 ) 1 - 6 2009.07
Constructions of Irregular LDPC Code for a Burst Erasure
HOSOYA Gou, KOBAYASHI Manabu, HIRASAWA Shigeichi
IEICE technical report 108 ( 473 ) 447 - 452 2009.03
View Summary
We develop two methods of constructing irregular low-density parity-check (LDPC) codes for correcting a burst erasure. With high probability, performance degradation is caused by small stopping sets formed by many degree-two variable nodes. In this paper, we present an effective variable-node reordering method that increases the minimum span of stopping sets (MSSS) by using the special structure of the degree-two variable nodes of extended irregular repeat-accumulate (eIRA) codes. The two proposed codes are also efficiently encodable, since they belong to the class of eIRA codes. Furthermore, we present a construction method for irregular LR-LDPC codes, which are known to have large MSSS. The experimental results show that the MSSS of the proposed codes is larger than that of conventional ones.
SATO Yoshiyuki, HOSOYA Gou, HIRASAWA Shigeichi
IEICE technical report 108 ( 473 ) 421 - 426 2009.03
View Summary
In this paper, we propose an effective grouping method for the Group Shuffled BP decoding algorithm with irregular LDPC codes, based on the structure of the Tanner graph and the degree distribution of the codes, which accelerates the convergence speed and, as a result, reduces the decoding delay. We show simulation results indicating that the decoding performance of the proposed grouping method is better than that of the conventional grouping method.
Minimum span analysis for a combined graph ensemble of low-density parity-check codes
ISHIKAWA Yujin, SATO Yoshiyuki, HOSOYA Gou, HIRASAWA Shigeichi
IEICE technical report 108 ( 473 ) 435 - 440 2009.03
View Summary
The combination of a low-density parity-check (LDPC) code and the belief-propagation (BP) decoding algorithm exhibits good decoding performance. Error correction via LDPC codes is now applied to high-speed wireless channels, wired channels, and magnetic recording systems. To evaluate the burst erasure correction capability, analysis of the LDPC code ensemble has been employed. This analytical method first evaluates the minimum span of the stopping sets and then calculates the critical minimum span rate, which expresses the ratio of the correctable burst erasure length to the code length. In this paper, we formulate a more accurate critical minimum span rate by using the minimum stopping set for the standard Tanner graph ensemble. We then apply it to evaluate the critical minimum span rate for the LR Tanner graph ensemble, a combined graph ensemble with good burst erasure correction capability.
A Note on Reducing the Computational Complexity of Effect Estimation in Experimental Design
Yoshifumi Ukita, Toshiyasu Matsushima, Shigeichi Hirasawa
Proceedings of the 31st Symposium on Information Theory and Its Applications (SITA2008) 945 - 950 2008.10
Error Correcting Codes using plural LDPC codes and Interleaving for a Finite-State Markov Channel
KOBAYASHI Naoto, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report 108 ( 202 ) 55 - 60 2008.09
View Summary
In this paper, we study interleaved codes with plural LDPC codes. Interleaved codes are one of the efficient error-correcting schemes for burst errors. An interleaver is usually utilized to disperse burst errors; in contrast, in this study the interleaver is utilized for good estimation of the channel states in joint estimation-decoding. We evaluate the error-correcting performance of the interleaved codes by computer simulation. Furthermore, we propose a simple analysis method for these interleaved codes based on density evolution that takes the channel states into account.
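The classical role of an interleaver, dispersing a burst so that each component codeword sees few errors, can be sketched as follows (a generic row/column block interleaver, assumed here for illustration):

```python
def interleave(bits, rows, cols):
    """Block interleaver: write row-wise into a rows x cols array, read
    column-wise.  A channel burst is then spread across different rows,
    i.e., across different component codewords."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse of interleave: write column-wise, read row-wise."""
    assert len(bits) == rows * cols
    out = [None] * (rows * cols)
    k = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[k]
            k += 1
    return out

# A burst hitting 3 consecutive channel positions lands in 3 different
# rows after deinterleaving, so each row's code faces only one error.
data = list(range(12))
tx = interleave(data, rows=3, cols=4)
rx = tx[:]
rx[4:7] = ['E', 'E', 'E']              # burst of length 3 on the channel
back = deinterleave(rx, rows=3, cols=4)
print([back[i * 4:(i + 1) * 4].count('E') for i in range(3)])  # [1, 1, 1]
```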
A-6-6 A Modification and Analysis of Bit-Flipping Decoding for Generalized LDPC Codes
Kanai Takahiro, Hosoya Gou, Yagi Hideki, Hirasawa Shigeichi
Proceedings of the Society Conference of IEICE 2008 116 - 116 2008.09
Near Duplicated Image Detection by Learning with Difference Vector to Original Image
SANO Toshiyuki, HOSOYA Gou, YAGI Hideki, HIRASAWA Shigeichi
IEICE technical report 107 ( 489 ) 1 - 6 2008.02
View Summary
In this paper, a new pre-trained detection scheme for near-duplicated images is proposed, using the assumption that the difference vectors between an original image and its near-duplicated image form altered or non-altered spaces independent of the original images. With the pre-trained results, we regard features with large values in the difference vectors as the features affected by alteration. To avoid mis-detection, the proposed method condenses similar features together. The proposed method is also valid for original images not used in pre-training. We show by simulation results that the precision of the proposed method is higher than that of conventional ones.
A Note on the Weak Universal Joint Source-Channel Coding
NOMURA Ryo, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report 107 ( 422 ) 1 - 6 2008.01
View Summary
In this paper, weak universal joint source-channel coding is considered. Weak universal means that the source and channel probabilistic structures are parametric and the parameter set and its prior probability are known, but the parameter itself is unknown. We then show a condition on the class of sources and channels under which the error probability averaged over the prior probability converges to 0.
A Note on Multiuser Detection Algorithms for CDMA based on the Belief Propagation Algorithm
HORII Shunsuke, SUKO Tota, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report 107 ( 422 ) 7 - 12 2008.01
View Summary
Multiuser detection algorithms based on the belief propagation algorithm are presented. A multiuser detection algorithm based on belief propagation was proposed previously; however, that algorithm is an approximation of belief propagation, adopted because of the computational complexity. In this paper, by converting the factor graph structure, the belief propagation algorithm can be applied to multiuser detection problems exactly as originally defined. The proposed algorithm is optimum when the factor graph, whose form is determined by the signature sequences, has no cycles.
A Bayes Prediction Algorithm for Regression models with Outliers
SUKO Tota, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IPSJ SIG Notes 2007 ( 128 ) 13 - 16 2007.12
View Summary
Outliers are often included in statistical data, and statistical analysis results are influenced by them. Therefore, there has been much research on handling outliers. Box modeled outliers using a mixture distribution, and much research has aimed at parameter estimation or outlier detection for this model. In this paper, we treat the prediction problem for this model. First, we present an optimal prediction method with respect to the Bayes criterion for this model. The computational complexity of this method grows exponentially. Next, we propose an approximation algorithm that reduces the computational complexity using the EM algorithm, and we evaluate the algorithm through simulations.
Image Retrieval Using Bit-Plane for JPEG2000 Coded Texture Image
NOGUCHI Tatsuya, HOSOYA Gou, YAGI Hideki, HIRASAWA Shigeichi
IPSJ SIG Notes 2007 ( 125 ) 39 - 43 2007.12
View Summary
We propose an image retrieval method that extracts features using probabilities and Euler vectors of the bit-planes of wavelet coefficients. Each bit-plane is associated with probabilities representing the frequency of occurrence of "1", and the bit-planes of wavelet coefficients can be extracted from JPEG2000-coded images. Euler vectors, generated from a bit-plane, also have a great affinity with JPEG2000. In this paper, we propose an image retrieval method using the bit-planes and Euler vectors of JPEG2000-coded texture images. We show by simulation results using the VisTex data set that the retrieval precision of the proposed method is better than that of the conventional one.
Image Retrieval Using Bit-Plane for JPEG2000 Coded Texture Image
NOGUCHI Tatsuya, HOSOYA Gou, YAGI Hideki, HIRASAWA Shigeichi
IEICE technical report 107 ( 380 ) 39 - 43 2007.12
View Summary
We propose an image retrieval method that extracts features using probabilities and Euler vectors of the bit-planes of wavelet coefficients. Each bit-plane is associated with probabilities representing the frequency of occurrence of "1", and the bit-planes of wavelet coefficients can be extracted from JPEG2000-coded images. Euler vectors, generated from a bit-plane, also have a great affinity with JPEG2000. In this paper, we propose an image retrieval method using the bit-planes and Euler vectors of JPEG2000-coded texture images. We show by simulation results using the VisTex data set that the retrieval precision of the proposed method is better than that of the conventional one.
A generalization of the parallel error correcting codes by allowing some random errors
Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E90A ( 9 ) 1745 - 1753 2007.09
View Summary
This paper generalizes the parallel error-correcting codes proposed by Ahlswede et al. over a new type of multiple-access channel called the parallel error channel. The generalized parallel error-correcting codes can handle more errors than the original ones. We show construction methods for independent and non-independent parallel error-correcting codes, together with decoding methods. We derive some bounds on the size of the respective parallel error-correcting codes. The obtained results imply that a single parallel error-correcting code can be constructed from two or more kinds of error-correcting codes with distinct error-correcting capabilities.
On the Condition of ε-Achievable Overflow Thresholds for the Parametric Compound Source
NOMURA Ryo, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report 107 ( 42 ) 37 - 41 2007.05
View Summary
In previous work, we generalized the achievability of variable-length coding from two viewpoints: the definition of the overflow probability and the definition of achievability. We defined the overflow probability as the probability that the codeword length, not the length per symbol, is larger than η_n, and we introduced the ε-achievability of variable-length codes, which implies the existence of a code for the source such that the overflow probability is smaller than or equal to ε. We then showed the condition of ε-achievability for some restricted classes of sources given ε. In this paper, we first define ε-achievability for the parametric compound source, and then show a sufficient condition for the ε-achievability.
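In symbols, the overflow probability and ε-achievability described above can be written as follows (a paraphrase consistent with the abstract, not necessarily the paper's exact notation):

```latex
% Overflow probability of a variable-length code \varphi_n with
% codeword-length function \ell(\cdot) and threshold \eta_n:
P_{\mathrm{ov}}(\varphi_n, \eta_n) = \Pr\bigl[\ell(\varphi_n(X^n)) > \eta_n\bigr].
% \varepsilon-achievability of the threshold sequence \{\eta_n\}:
% there exists a sequence of codes \{\varphi_n\} such that
\limsup_{n \to \infty} P_{\mathrm{ov}}(\varphi_n, \eta_n) \le \varepsilon .
```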
On the Probability of Length Overflow of Lossless Codes
NOMURA Ryo, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
The IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences (Japanese edition) A 90 ( 4 ) 292 - 304 2007.04
A Study of an Authentication Infrastructure for Mobile Services
梅澤克之, 坂崎尚生, 佐藤一夫, 松木彰, 曽我健二, 片山透, 清本晋作, 田中俊昭, 平澤茂一
IEICE Transactions D, vol. J90-D, no. 2, pp. 596-599, February 2007. 2007
Development and evaluation of a certificate validation system in mobile environments
Katsuyuki Umezawa, Seiichi Susaki, Satoru Tezuka, Shigeichi Hirasawa
IEEJ TRANSACTIONS ON ELECTRICAL AND ELECTRONIC ENGINEERING 2 ( 1 ) 84 - 93 2007.01
View Summary
We have developed a public key certificate validation system considering the restrictions peculiar to the mobile environment, such as the processing speed and memory capacity of a cellular-phone terminal and the network transmission speed. In this paper we derive theoretical formulas for the performance of a validity check of a public key certificate in the conventional system and in the proposed system, and compare the theoretical values in a mobile environment. Moreover, we evaluate actual measurements using the server and cellular-phone terminal that we developed. We show that our proposed system, based on the certificate validation server (CVS) approach, is better than the conventional system from the viewpoint of processing speed and transmission speed. (c) 2007 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc.
Evaluation of a Certificate Validation Method in Mobile Environments
梅澤克之, 笈川光浩, 洲崎誠一, 手塚悟, 平澤茂一
IEICE Transactions D, vol. J90-D, no. 2, pp. 384-398, February 2007. 2007
Development of a Certificate Validation System for Mobile Terminals
梅澤克之, 笈川光浩, 洲崎誠一, 手塚悟, 平澤茂一
IPSJ Journal, J90-D, no. 2, February 2007. 2007
On the Coding Performance of the LZ78 Code for Word-Valued Sources That Do Not Satisfy the Prefix Condition
Takashi Ishida, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE Technical Report, vol. 106, no. 516, IT2006-52, pp. 13-18, January 2007. 2007
Shortened and Concatenated Collusion-secure Codes for Digital Fingerprinting
Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE Technical Report, vol. 107,no.42, IT2007-6, pp.31-36, May 2007. 2007
An Effective Grouping Method for the Group Shuffled BP Decoding Algorithm
Yoshiyuki Sato, Gou Hosoya, Hideki Yagi, Shigeichi Hirasawa
IEICE Technical Report, vol. 107, no. 42, IT2007-3, pp. 13-18, May 2007. 2007
An Adaptive Decoding Algorithm of LDPC Codes over the Binary Erasure Channel
Gou Hosoya, Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
Proc. 2007 Hawaii and SITA Joint Conference on Information Theory, Hawaii, U.S.A., May 2007. 2007
Shortening Methods of Collusion-Secure Codes for Digital Fingerprinting
Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
Proc. 2007 Hawaii and SITA Joint Conference on Information Theory, Hawaii, U.S.A., May 2007. 2007
Improved collusion-secure codes for digital fingerprinting based on finite geometries
Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
2007 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS, VOLS 1-8 522 - + 2007
View Summary
Digital fingerprinting, a copyright protection technique for digital contents, is considered. Digital fingerprinting should deter collusion attacks, where several fingerprinted copies of the same content are mixed to disturb their fingerprints. In this paper, we consider the averaging attack, which is effective against multimedia fingerprinting. We propose new collusion-secure fingerprinting codes based on finite geometries (FGs) that increase the rate of conventional collusion-secure codes while guaranteeing identification of the same number of colluders. With the new FG-based fingerprinting codes, the system can deal with a larger number of users when distributing digital content.
Categorization based on the ratio of word frequency in each categories
Makoto Suzuki, Shigeichi Hirasawa
2007 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS, VOLS 1-8 3791 - + 2007
View Summary
In the present paper, we consider automatic text categorization as a series of information processing steps and propose a new classification technique called the Frequency Ratio Accumulation Method (FRAM). This is a simple technique that calculates the sum of the ratios of word frequencies in each category. In FRAM, feature terms can be used without limit; therefore, we propose the use of character N-grams and word N-grams as feature terms, exploiting this property of FRAM. Next, we evaluate the proposed technique through a number of experiments. In these experiments, we classify newspaper articles from the Japanese CD-Mainichi 2002 and the English Reuters-21578 collections using the Naive Bayes method (baseline method) and the proposed method. As a result, we show that the classification accuracy of the proposed method is far better than that of the baseline method. Specifically, the classification accuracy of the proposed method is 87.3% for Japanese CD-Mainichi 2002 and 86.1% for English Reuters-21578. Thus, the proposed method has very high performance. Although the proposed method is simple, it provides a new perspective, has high potential, and is language-independent. Thus, the proposed method can be expected to be developed further in the future.
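FRAM as described, accumulating per-category frequency ratios of the feature terms, can be sketched as follows (word unigram features and the toy corpus are assumptions; the paper uses character and word N-grams):

```python
def fram_train(docs, labels):
    """Tabulate term frequencies per category for the Frequency Ratio
    Accumulation Method (FRAM)."""
    freq = {}
    cats = sorted(set(labels))
    for words, cat in zip(docs, labels):
        for w in words:
            freq.setdefault(w, {c: 0 for c in cats})
            freq[w][cat] += 1
    return freq, cats

def fram_classify(words, freq, cats):
    """Score each category by accumulating, over the document's terms, the
    ratio of the term's frequency in that category to its total frequency;
    return the best-scoring category."""
    scores = {c: 0.0 for c in cats}
    for w in words:
        if w not in freq:
            continue                     # unseen feature terms contribute 0
        total = sum(freq[w].values())
        for c in cats:
            scores[c] += freq[w][c] / total
    return max(cats, key=lambda c: scores[c])

# Hypothetical toy corpus with two categories.
train = [(["goal", "match", "team"], "sports"),
         (["team", "score", "goal"], "sports"),
         (["stock", "market", "price"], "economy"),
         (["price", "market", "trade"], "economy")]
freq, cats = fram_train([d for d, _ in train], [c for _, c in train])
print(fram_classify(["goal", "team", "price"], freq, cats))  # sports
```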
Similar-Image Retrieval Considering Human Sensitivity to Color Information
藤田 大地, 野口 達也, 石田 崇, 平澤茂一
Proceedings of FIT2007 (Forum on Information Technology), Nagoya, September 2007. 2007
A Trademark Image Classification Method Using Rotational Autocorrelation Functions
餅原 道元, 佐野 利行, 石田 崇, 平澤茂一
Proceedings of FIT2007 (Forum on Information Technology), Nagoya, September 2007. 2007
Student questionnaire analyses for class management by text mining both in Japanese and in Chinese
Shigeichi Hirasawa, Fu-Yih Shih, Wei-Tzen Yang
2007 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS, VOLS 1-8 3777 - + 2007
View Summary
By combining statistical analyses and information retrieval techniques, an efficient way of discovering knowledge from questionnaires is discussed. Since usual questionnaires include questions answered in a fixed format and questions answered in a free format, it is important to introduce methods from both data mining and text mining. The answers in the fixed format are called "items", and those in the free format simply "texts". In this paper, using an algorithm for processing answers with both items and texts and an algorithm for extracting important sentences from texts, combined with statistical techniques, a method for analyzing questionnaires is established. The method is applied to a case of improving the quality of education in which a student questionnaire is administered to a class, and we obtain useful knowledge that leads to faculty development.
Properties of a word-valued source with a non-prefix-free word set
Takashi Ishida, Masayuki Goto, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E89A ( 12 ) 3710 - 3723 2006.12
View Summary
Recently, a word-valued source has been proposed as a new class of information source models. A word-valued source is regarded as a source with a probability distribution over a word set. Although a word-valued source is a nonstationary source in general, it has been proved that an entropy rate of the source exists and the Asymptotic Equipartition Property (AEP) holds when the word set of the source is prefix-free. However, when the word set is not prefix-free (non-prefix-free), only an upper bound on the entropy density rate for an i.i.d. word-valued source has been derived so far. In this paper, we newly derive a lower bound on the entropy density rate for an i.i.d. word-valued source with a finite non-prefix-free word set. Then some numerical examples are given in order to investigate the behavior of the bounds.
A Note on the overflow probability of lossless codes
NOMURA Ryo, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
29 ( 2 ) 799 - 802 2006.11
A Note on Transmission Schemes with Unequal Error Protection Codes and a Feedback Channel
KOBAYASHI Naoto, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
29 ( 2 ) 815 - 818 2006.11
A study of interactive source coding of correlated sources
YOSHIDA Takahiro, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
29 ( 1 ) 355 - 358 2006.11
Information Theoretic Consideration of Document Classification
GOTO Masayuki, HIRASAWA Shigeichi
29 ( 2 ) 621 - 624 2006.11
Fast algorithm for generating candidate codewords in reliability-based maximum likelihood decoding
Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E89A ( 10 ) 2676 - 2683 2006.10
View Summary
We consider reliability-based heuristic search methods for maximum likelihood decoding, which generate test error patterns (or, equivalently, candidate codewords) according to their heuristic values. Some studies have proposed methods for reducing the space complexity of these algorithms, which is crucially large for long block codes at medium to low signal-to-noise ratios. In this paper, we propose a new method for reducing the time complexity of generating candidate codewords by storing some already generated candidate codewords. Simulation results show that the increase in memory size is small.
A modification method for constructing low-density parity-check codes for burst erasures
Gou Hosoya, Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E89A ( 10 ) 2501 - 2509 2006.10
View Summary
We study a modification method for constructing low-density parity-check (LDPC) codes for solid burst erasures. Our proposed modification method is based on a column permutation technique for a parity-check matrix of the original LDPC codes. It can change the burst erasure correction capability without degrading the performance over random erasure channels. We show by simulation that the performance of codes permuted by our method is better than that of the original codes, especially with two or more solid burst erasures.
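To make the column-permutation idea concrete, here is a toy sketch (the parity-check matrix and permutation are hypothetical, not the paper's construction) showing that permuting columns changes which solid bursts an iterative erasure decoder can resolve:

```python
import numpy as np

# Toy parity-check matrix (illustrative only).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=int)

def peel(H, erased):
    """Iterative (peeling) erasure decoding: any check involving exactly
    one erased position recovers it.  Returns True iff all erasures are
    eventually resolved."""
    erased = set(erased)
    progress = True
    while erased and progress:
        progress = False
        for row in H:
            hit = [j for j in np.flatnonzero(row) if j in erased]
            if len(hit) == 1:
                erased.discard(hit[0])
                progress = True
    return not erased

perm = [0, 3, 1, 4, 2, 5]   # interleave columns (hypothetical choice)
H_perm = H[:, perm]

burst = [0, 1, 2]           # a solid burst of three erasures
print(peel(H, burst), peel(H_perm, burst))
```

The permuted code is equivalent to the original, so random-erasure performance is unchanged; only the coordinate ordering, and hence the burst behavior, differs.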
A note on error correction schemes with a feedback channel
Naoto Kobayashi, Daiki Koizumi, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E89A ( 10 ) 2475 - 2480 2006.10
View Summary
We propose a new fixed-rate error correction system with a feedback channel. In our system, the receiver transmits to the transmitter a list of positions of unreliable information bits, based on the log a posteriori probability ratios computed from the outputs of a soft-output decoder. This method is similar to that of the reliability-based hybrid ARQ scheme. Dynamically selecting an appropriate interleaving function from the feedback information is a key feature of our system. By computer simulations, we show that the performance of the system is improved by this dynamic selection.
A note on construction of orthogonal arrays with unequal strength from error-correcting codes
Tomohiko Saito, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E89A ( 5 ) 1307 - 1315 2006.05
View Summary
Orthogonal arrays (OAs) have been playing important roles in the field of experimental design. It is known that OAs are closely related to error-correcting codes, so many OAs can be constructed from error-correcting codes. However, these OAs are suitable only for cases in which equal interaction effects can be assumed, for example, all two-factor interaction effects. Since such cases are rare in experimental design, OAs from error-correcting codes are not very practical. In this paper, we define OAs with unequal strength. In our terminology, OAs from error-correcting codes are OAs with equal strength. We show that OAs with unequal strength are closer to practical OAs than OAs with equal strength, and we clarify the relation between OAs with unequal strength and unequal error-correcting codes. Finally, we propose construction methods for OAs with unequal strength from unequal error-correcting codes.
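The code-to-OA correspondence mentioned in the summary can be checked directly on a toy example (the generator matrix below is illustrative, not the paper's): the codewords of a binary linear code, listed as rows, form an orthogonal array, and its strength can be verified by counting t-tuples in every set of t columns.

```python
import itertools
import numpy as np

# Toy generator matrix of a [4,2] binary code (illustrative only).
G = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0]], dtype=int)

# List all codewords as the rows of the candidate orthogonal array.
rows = np.array([(np.array(m) @ G) % 2
                 for m in itertools.product([0, 1], repeat=2)])

def strength_at_least(rows, t):
    """True iff every choice of t columns contains every binary t-tuple
    equally often -- the definition of an OA of strength t."""
    n = rows.shape[1]
    for cols in itertools.combinations(range(n), t):
        counts = {}
        for r in rows[:, list(cols)]:
            counts[tuple(r)] = counts.get(tuple(r), 0) + 1
        if len(counts) < 2 ** t or len(set(counts.values())) != 1:
            return False
    return True

print(strength_at_least(rows, 1), strength_at_least(rows, 2))
```

Here the array has strength 1 but not 2; by the standard relation, the strength of the array formed by a linear code equals the code's dual distance minus one, which is the bridge between codes and OAs that the summary alludes to.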
Transformation of a parity-check matrix for a message-passing algorithm over the BEC
Naoto Kobayashi, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E89A ( 5 ) 1299 - 1306 2006.05
View Summary
We propose a transformation of the parity-check matrix of any low-density parity-check code. A code with the transformed parity-check matrix is equivalent to a code with the original parity-check matrix. For the binary erasure channel, the performance of a message-passing algorithm with the transformed parity-check matrix is better than that with the original matrix.
A Construction Method for Periodically Time-Varying Convolutional Codes from Binary Linear Block Codes
小笠原尚徳, 小林学, 平澤茂一
IEICE Transactions (A) vol.J89-A ( no.2 ) 144 - 153 2006
Interleaver Construction Methods and the Minimum Distance of Block Turbo Codes
小林学, 松嶋敏泰, 平澤茂一
IEICE Transactions (A) vol.J89-A ( no.2 ) 129 - 143 2006
An Application of Coding Theory into Experimental Design
Shigeichi Hirasawa
A short course at Tamkang University, Taipei, R.O.C., March 2006. 2006
Analyses of Student Questionnaires for Faculty Developments
Shigeichi Hirasawa
A short course at Tamkang University, Taipei, R.O.C., March 2006. 2006
Classification and clustering methods for documents by probabilistic latent semantic indexing model
Shigeichi Hirasawa
A short course at Tamkang University, Taipei, R.O.C., March 2006. 2006
Knowledge Discovery from Documents -- A Case Study of Improvements for Quality of Education --
Shigeichi Hirasawa
The 2006 International Seminar of E-commerce Academic and Application Research, Tainan, Taiwan, R.O.C., March 1-2, 2006. 2006
The Current Situation and Future Development of Japanese Universities
Shigeichi Hirasawa
The 2006 International Seminar of E-commerce Academic and Application Research, Tainan, Taiwan, R.O.C., March 1-2, 2006. 2006
An Application of Coding Theory into Experimental Design -- Construction Methods for Unequal Orthogonal Arrays --
Shigeichi Hirasawa
The 2006 International Seminar of E-commerce Academic and Application Research, Tainan, Taiwan, R.O.C., March 1-2, 2006. 2006
A Modification Method for Constructing Low-Density Parity-Check Codes for Burst Erasures
Gou Hosoya, Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE Technical Report IT2005-121 153 - 158 2006
On Factorial Effects Corresponding to Orthogonal Arrays with Unequal Strength
SAITO Tomohiko, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report vol.106 ( no.60 ) 53 - 58 2006
View Summary
Orthogonal arrays (OAs) have been playing important roles in the field of experimental design. It is well known that OAs are closely related to error-correcting codes, and many good OAs are constructed from error-correcting codes; the factorial effects that can be estimated with OAs corresponding to error-correcting codes have been clarified. On the other hand, in our previous work, we focused on the relation between OAs and unequal error protection codes (UEP codes). OAs corresponding to UEP codes are called OAs with unequal strength (UOAs), whereas OAs corresponding to error-correcting codes are called OAs with equal strength. We showed that UOAs are more practical than OAs with equal strength, and proposed some construction methods for UOAs from UEP codes. However, that work only gave examples of factorial effects that can be estimated with UOAs; these factorial effects had not been clarified as they have been for OAs with equal strength. In this paper, we clarify the factorial effects that can be estimated with UOAs.
KOIZUMI Daiki, KOBAYASHI Naoto, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report vol.106 ( no.60 ) 23 - 28 2006
View Summary
Reliability-Based Hybrid ARQ (RB-HARQ) is an ARQ scheme using modified feedback. In RB-HARQ, the receiver returns both a NAK signal and the indices of unreliable bits if the received sequence is not acknowledged. Since an unreliable bit index is determined by the bitwise posterior probability, a good approximation of that probability is crucial. Assuming systematic codes, the proposed RB-HARQ scheme has two features for this approximation: (1) the sender retransmits newly encoded parity bits corresponding to the unreliable information bits; (2) the retransmitted parity bits, together with the initially received sequence, are used jointly in message-passing decoding.
A New Class of Traceability Codes for Digital Fingerprinting
Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
IEICE Technical Report vol.106 ( no.60 ) 13 - 18 2006
New Traceability Codes against a Generalized Collusion Attack for Digital FingerPrinting
Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
Information Security Applications: 6th International Workshop, Wisa 2005, Jeju Island, Korea, August 22-24, 2005, Selected Papers (Lecture Notes in Computer Science), Springer-Verlag. 2006
New Traceability Codes against a Generalized Collusion Attack for Digital Fingerprinting
Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
Proceedings of 2006 International Workshop on Information Security Applications (WISA2006) , Jeju Island, Korea, Aug. 2006. 2006
Exponential error bounds and decoding complexity for block concatenated codes with tail biting trellis inner codes
Shigeichi Hirasawa, Masao Kasahara
Journal of Discrete Mathematical Sciences and Cryptography 9 ( 2 ) 307 - 320 2006
View Summary
Tail biting trellis codes and block concatenated codes are discussed from random coding arguments. An error exponent and decoding complexity for tail biting random trellis codes are shown. We then propose a block concatenated code constructed by a tail biting trellis inner code and derive an error exponent and decoding complexity for the proposed code. The results obtained by the proposed code show that we can attain a larger error exponent at all rates except for low rates with the same decoding complexity compared with the original concatenated code. © 2006 Taylor & Francis Group, LLC.
Reducing the Computational Complexity of Intra Prediction Mode Decision in H.264
野口達也, 金田海渡, 平澤茂一
Proc. FIT2006 (Forum on Information Technology), J-021, Fukuoka, Sept. 2006. 2006
An Improved Query Expansion Method Based on Contextual Relevance
坂口 朋章, 平松 丈嗣, 平澤 茂一
Proc. FIT2006 (Forum on Information Technology), E-001, Fukuoka, Sept. 2006. 2006
Lossless H.264 Video Coding Considering Correlations between Adjacent Blocks
佐野利行, 作田豊, 平澤茂一
Proc. FIT2006 (Forum on Information Technology), J-023, Fukuoka, Sept. 2006. 2006
A Note on the Statistical Properties of Document Models
後藤正幸, 石田崇, 平澤茂一
Japan Industrial Management Association, 2006 Autumn Conference, H02, Hiroshima, Oct. 2006. 2006
Performance of Low-Density Parity-Check Codes for Burst Erasure Channels
Gou Hosoya, Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
Proc. of 2006 International Symposium on Information Theory and its Applications (ISITA2006), Seoul, Korea, Oct. 2006. 2006
A generalization of the parallel error correcting codes
Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
PROCEEDINGS OF 2006 IEEE INFORMATION THEORY WORKSHOP 229 - + 2006
View Summary
This paper generalizes the parallel error-correcting codes proposed by Ahlswede et al. over a type of multiple-access channel called a parallel channel. The generalized parallel error-correcting codes can handle more errors than the original ones. We show construction methods for independent and non-independent parallel error-correcting codes and their decoding methods, and we derive some bounds on the size of the respective parallel error-correcting codes.
Development of an Attribute Certificate Validation Server for Mobile Devices
梅澤克之, 高橋礼, 内山宏樹, 坂崎尚生, 笈川光浩, 小林賢, 平澤茂一
Proc. Computer Security Symposium, Oct. 2006. 2006
A Method for Finding Similar Documents for Copyright Infringement Detection
高島秀佳, 坂口朋章, 長尾壮史, 石田崇, 平澤茂一
Proc. 2006 Autumn National Conference of the Japan Society for Management Information 58 - 61 2006
Knowledge Discovery from Student Questionnaires by PLSI: An Analysis of Japanese and Taiwanese Questionnaires
長尾壮史, 坂口朋章, 石田崇, 平澤茂一
Proc. 2006 Autumn National Conference of the Japan Society for Management Information 210 - 213 2006
An Interim Report on Student Questionnaire Analysis in Taiwan
蔡宣静, 坂口朋章, 酒井哲也, 黄永東, 石田崇, 平澤茂一
Proc. 2006 Autumn National Conference of the Japan Society for Management Information 214 - 217 2006
A Theoretical Study of University Entrance Examinations Based on Statistical Models
後藤正幸, 石田崇, 平澤茂一
Proc. 2006 Autumn National Conference of the Japan Society for Management Information 224 - 227 2006
Collaborative Filtering by Correlation-Based Co-Clustering
高島秀佳, 石田崇, 平澤茂一
Proc. 29th Symposium on Information Theory and its Applications (SITA2006), Hakodate, Nov. 2006. 715 - 718 2006
Document Clustering by PLSI Considering Word Features
長尾壮史, 八木秀樹, 平澤茂一
Proc. 29th Symposium on Information Theory and its Applications (SITA2006), Hakodate, Nov. 2006. 617 - 620 2006
Document Retrieval by Relevance Feedback Using Hierarchical Clusters
穴瀬裕之, 石田崇, 平澤茂一
Proc. 29th Symposium on Information Theory and its Applications (SITA2006), Hakodate, Nov. 2006. 711 - 714 2006
Still Image Compression by the SPIHT Algorithm Exploiting Correlations among Color Components
金田海渡, 細谷剛, 平澤茂一
Proc. 29th Symposium on Information Theory and its Applications (SITA2006), Hakodate, Nov. 2006. 533 - 536 2006
Document Retrieval by Cluster-Based Relevance Feedback
湯木野高幸, 石田崇, 平澤茂一
Proc. 29th Symposium on Information Theory and its Applications (SITA2006), Hakodate, Nov. 2006. 2006
Document Retrieval by Relevance Feedback Based on Modified Word Weights
平松丈嗣, 石田崇, 平澤茂一
Proc. 29th Symposium on Information Theory and its Applications (SITA2006), Hakodate, Nov. 2006. 703 - 706 2006
An Improved Bit-Flipping Decoding Algorithm for LDPC Codes Using Reliability Updates
長谷川裕, 細谷剛, 八木秀樹, 平澤茂一
Proc. 29th Symposium on Information Theory and its Applications (SITA2006), Hakodate, Nov. 2006. 649 - 652 2006
Document Classification by a Naive Bayes Model Considering Word Co-occurrence
津田裕一, 八木秀樹, 平澤茂一
Proc. 29th Symposium on Information Theory and its Applications (SITA2006), Hakodate, Nov. 2006. 613 - 616 2006
A Data Embedding Method for the SPIHT Algorithm
作田豊, 細谷剛, 平澤茂一
Proc. 29th Symposium on Information Theory and its Applications (SITA2006), Hakodate, Nov. 2006. 529 - 532 2006
An Improved Adaptive Prediction Tree for Still Image Compression
小俣順, 細谷剛, 平澤茂一
Proc. 29th Symposium on Information Theory and its Applications (SITA2006), Hakodate, Nov. 2006. 537 - 540 2006
On Correctable Burst-Erasure Lengths for LDPC Codes with Column Permuted Parity-Check Matrices
Gou Hosoya, Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
Proc. 29th Symposium on Information Theory and its Applications (SITA2006), pp.645-648, Hakodate, Nov. 2006. 2006
Decoding Performance of Linear Parallel Error Correcting Codes
Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
Proc. 29th Symposium on Information Theory and its Applications (SITA2006), pp.189-192, Hakodate, Nov. 2006. 2006
Development of an Attribute Certificate Validation System for Mobile Devices
梅澤克之, 笈川光浩, 小林賢, 平澤茂一
Proc. 29th Symposium on Information Theory and its Applications (SITA2006), pp.665-668, Hakodate, Nov. 2006. 2006
An Evaluation Method for the Decoding Error Probability of EM Decoding over HMM Channels
小林 学, 八木 秀樹, 松嶋 敏泰, 平澤 茂一
Proc. 29th Symposium on Information Theory and its Applications (SITA2006), Hakodate, Nov. 2006. 2006
Improving the Feedback Error Exponent of Block Codes Using the Decision Criterion LR+Th
新家稔央, 松嶋敏泰, 平澤茂一
IEICE Transactions (A), vol.J89-A, no.12, pp.1168-1174, Dec. 2006. 2006
A Note on the Construction of Orthogonal Designs Using Error Correcting Codes
SAITO Tomohiko, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
27 ( 2 ) 463 - 466 2005.12
Redundancy of Bayes Codes for Nonstationary Sources with Piecewise Constant Parameters
SUKO Tota, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
27 ( 2 ) 523 - 526 2005.12
On Error Exponents for Variable Size List decoder using the Viterbi Algorithm with Likelihood Ratio Testing
NIINOMI Toshihiro, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
27 ( 2 ) 815 - 818 2005.12
Evaluation of certificate verification method in mobile environment
UMEZAWA Katsuyuki, OIKAWA Mitsuhiro, SUSAKI Seiichi, HIRASAWA Shigeichi
28 ( 2 ) 587 - 590 2005.11
A Note on Universal Coding Algorithm with the BWT
SUKO Tota, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
28 ( 1 ) 315 - 318 2005.11
Weighted Bit-Flipping Decoding Algorithms of Low-Density Parity-Check Codes with Updating Reliabilities
SATO Tadashi, HOSOYA Gou, YAGI Hideki, HIRASAWA Shigeichi
28 ( 1 ) 9 - 12 2005.11
A Note on an Error-Correcting System with a Feedback Channel
KOBAYASHI Naoto, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
28 ( 1 ) 343 - 346 2005.11
A heuristic search method with the reduced list of test error patterns for maximum likelihood decoding
H Yagi, T Matsushima, S Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E88A ( 10 ) 2721 - 2733 2005.10
View Summary
The reliability-based heuristic search methods for maximum likelihood decoding (MLD) generate test error patterns (or, equivalently, candidate codewords) according to their heuristic values. Test error patterns are stored in lists, whose space complexity is crucially large for MLD of long block codes. Based on the decoding algorithms both of Battail and Fang and of the generalized version suggested by Valembois and Fossorier, we propose a new method for reducing the space complexity of the heuristic search methods for MLD, including the well-known decoding algorithm of Han et al. If the heuristic function satisfies a certain condition, the proposed method is guaranteed to reduce the space complexity of both the Battail-Fang and Han et al. decoding algorithms. Simulation results show the high efficiency of the proposed method.
A Note on the Construction of Nonlinear Unequal Orthogonal Arrays from Error-Correcting Codes
SAITO Tomohiko, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE Technical Report 105 ( 84 ) 13 - 18 2005.05
View Summary
Orthogonal arrays have been used in the field of experimental design. Hedayat and Sloane showed the relation between orthogonal arrays and error-correcting codes [1], and proposed construction methods for both linear and nonlinear orthogonal arrays from error-correcting codes. On the other hand, the paper [5] defined unequal orthogonal arrays as a new class and showed that they are more applicable to experimental design. Furthermore, it showed the relation between unequal orthogonal arrays and unequal error-correcting codes [3], and proposed a construction method for unequal orthogonal arrays from unequal error-correcting codes. However, the orthogonal arrays obtained by that construction are all linear. In this paper, we clarify the relation between nonlinear unequal orthogonal arrays and codes, and propose a construction method for nonlinear unequal orthogonal arrays from error-correcting codes.
An Improved Genetic Algorithm for the Nurse Scheduling Problem
福井知博, 石田崇, 平澤茂一
Proc. 17th Symposium on Autonomous Decentralized Systems, Yokohama 295 - 298 2005
A Research Support Environment Using the Internet: An Electronic Meeting System
野村亮, 石田崇, 中澤真, 鴻巣敏之, 松嶋敏泰, 平澤茂一
Proc. 2005 Spring National Conference of the Japan Society for Management Information 254 - 257 2005
A Research Support Environment Using the Internet: An Information Retrieval System
石田崇, 足立鉱史, 後藤正幸, 酒井哲也, 平澤茂一
Proc. 2005 Spring National Conference of the Japan Society for Management Information 302 - 305 2005
Document Classification Methods and Their Application to Questionnaire Analysis
平澤茂一, 石田崇, 足立鉱史, 後藤正幸, 酒井哲也
Proc. 2005 Spring National Conference of the Japan Society for Management Information 54 - 57 2005
Analysis of student questionnaire in the lecture of computer science
Ishida Takashi, Goto Masayuki, Hirasawa Shigeichi
Computer & Education vol.18 152 - 157 2005
View Summary
Universities are expected to pursue continuous faculty development. Using knowledge acquisition methods such as statistical techniques and information retrieval methods, we propose efficient methods for analyzing students' answers to questionnaires. First, a model showing the relationships between a student's characteristics and the degree of satisfaction or their score is proposed. Next, a set of questionnaires is analyzed using the statistical technique or the information retrieval method, and finally the class model is verified.
A PPM* Algorithm Considering Context Mixing
穴瀬裕之, 芥子和宏, 石田崇, 平澤茂一
IEICE Technical Report vol.105 ( no.191 ) 35 - 40 2005
Complexity Reduction of q-ary Turbo Decoding by Trellis Branch Pruning
長谷川裕, 細谷剛, 八木秀樹, 平澤茂一
IEICE Technical Report vol.105 ( no.190 ) 9 - 14 2005
A Study on Internet Traffic Analysis Using a Time Series Model with a Time-Varying Poisson Rate Parameter
小泉大城, 松嶋敏泰, 平澤茂一
Proc. FIT2005 (Forum on Information Technology) L-016 2005
An Analysis of Student Questionnaires on an Educational Affairs Support System
渡辺智幸, 後藤正幸, 石田崇, 酒井哲也, 平澤茂一
Proc. FIT2005 (Forum on Information Technology) N-008 2005
Still Image Compression with Blockwise Multiple Scanning
金田海渡, 芥子和宏, 石田崇, 平澤茂一
Proc. FIT2005 (Forum on Information Technology) J-067 2005
A Lossless Predictive Coding Method for Still Images Considering Information Bias
小俣順, 森中亮, 石田崇, 平澤茂一
Proc. FIT2005 (Forum on Information Technology) J-066 2005
Image Coding Considering Correlations among AC Coefficients in the DCT
作田豊, 森中亮, 石田崇, 平澤茂一
Proc. FIT2005 (Forum on Information Technology) J-065 2005
A Cluster-Based Relevance Feedback Method
湯木野高幸, 松下大輔, 平澤茂一
Proc. FIT2005 (Forum on Information Technology) D-044 2005
A Pseudo-Relevance Feedback Method Using Words Extracted from Initial Retrieval Results
平松丈嗣, 松下大輔, 平澤茂一
Proc. FIT2005 (Forum on Information Technology) D-043 2005
Document Classification Using Feature Selection Based on Mutual Information
津田裕一, 山岸英貴, 石田崇, 平澤茂一
Proc. FIT2005 (Forum on Information Technology) D-029 2005
Clustering of Retrieval Results Considering Words Characteristic of Each Document
長尾壮史, 山岸英貴, 平澤茂一
Proc. FIT2005 (Forum on Information Technology) D-028 2005
A Collaborative Filtering Method Based on Missing-Value Estimation
高島秀佳, 山岸英貴, 平澤茂一
Proc. FIT2005 (Forum on Information Technology) A-008 2005
Development of a Certificate Validation Server for Mobile Devices
梅澤克之, 高橋礼, 内山宏樹, 坂崎尚生, 笈川光浩, 洲崎誠一, 平澤茂一
IEICE Technical Report vol.105 ( no.311 ) 49 - 54 2005
The Relation between Erasure Correction and Error Correction for LDPC Codes
細谷剛, 松嶋敏泰, 平澤茂一
IEICE Technical Report vol.105 ( no.311 ) 13 - 17 2005
A Study on Reliability-Based Hybrid ARQ Using Recursive Convolutional Codes
小林直人, 小泉大城, 松嶋敏泰, 平澤茂一
IEICE Technical Report vol.105 ( no.311 ) 7 - 12 2005
A Matrix Clustering Method Guaranteeing a Specified Accuracy Using the Apriori Algorithm
小林 学, 松嶋 敏泰, 平澤 茂一
Proc. Spring National Conference of the Japan Society for Management Information vol.1 50 - 53 2005
Development of a Certificate Validation Server for Mobile Devices
梅澤克之, 高橋礼, 内山宏樹, 坂崎尚生, 笈川光浩, 州崎誠一, 平澤茂一
IEICE Technical Report IT2005-50 2005
Development and Evaluation of a Certificate Validation Server for Mobile Devices: R&D on Mobile Security Infrastructure Technology I
梅澤克之, 高橋礼, 内山宏樹, 坂崎尚生, 笈川光浩, 州崎誠一, 平澤茂一
Proc. Computer Security Symposium 2005 121 - 126 2005
A Transferable Electronic Ticket System Preventing Double Spending
榎木康二, 八木秀樹, 梅澤克之, 平澤茂一
Proc. 28th Symposium on Information Theory and its Applications (SITA2005) 579 - 582 2005
Collusion-Resistant Codes for Fingerprinting Based on Finite Geometries
八木秀樹, 松嶋敏泰, 平澤茂一
Proc. 28th Symposium on Information Theory and its Applications (SITA2005) 701 - 704 2005
A Modification of Huffman Coding in JPEG
芥子和宏, 石田崇, 平澤茂一
Proc. 28th Symposium on Information Theory and its Applications (SITA2005) 849 - 852 2005
Still Image Compression Using the SPIHT Algorithm on Individual Bit Planes
森中亮, 石田崇, 平澤茂一
Proc. 28th Symposium on Information Theory and its Applications (SITA2005) 861 - 864 2005
Document Classification Based on a Latent Semantic Space of Words
山岸英貴, 細谷剛, 平澤茂一
Proc. 28th Symposium on Information Theory and its Applications (SITA2005) 717 - 720 2005
Document Retrieval Based on Important Words Extracted from Relevant Documents
松下大輔, 細谷剛, 平澤茂一
Proc. 28th Symposium on Information Theory and its Applications (SITA2005) 713 - 716 2005
A decoding algorithm of low-density parity-check codes using decisions of erasure correcting
G. Hosoya, T. Matsushima, S. Hirasawa
Proc. 28th Symposium on Information Theory and its Applications (SITA2005) 5 - 8 2005
On Error Exponents for Variable-Size List Decoding Using the Viterbi Algorithm
新家稔央, 松嶋敏泰, 平澤茂一
IEICE Transactions (A) vol.J88-A ( no.11 ) 1343 - 1351 2005
A note on a decoding algorithm of codes on graphs with small loops
Naoto Kobayashi, Toshiyasu Matsushima, Shigeichi Hirasawa
Proceedings of the IEEE ITSOC Information Theory Workshop 2005 on Coding and Complexity, ITW2005 108 - 112 2005
View Summary
The best-known algorithm for decoding low-density parity-check (LDPC) codes is the sum-product algorithm (SPA). The SPA is a message-passing algorithm on a graphical model called a factor graph (FG). The performance of the SPA depends on the structure of loops in the FG. Pearl showed that loops in a graphical model can be erased by the clustering method, which clusters plural nodes into a single node. In this paper, we show several examples of decoding on an FG to which the clustering method is applied, and propose an efficient decoding algorithm for it. © 2005 IEEE.
Bayes universal coding algorithm for side information context tree models
Toshiyasu Matsushima, Shigeich Hirasawa
IEEE International Symposium on Information Theory - Proceedings 2005 2345 - 2348 2005
View Summary
The problem of universal coding with side information is investigated under the Bayes criterion. We propose side information context tree models, which are an extension of context tree models to sources with side information. Assuming a special class of prior distributions for side information context tree models, we propose an efficient Bayes coding algorithm for these models. The asymptotic code length of the Bayes codes with side information is also investigated.
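As background on Bayes codes generally (not the side-information context tree algorithm of the paper), the mixture idea in its simplest memoryless binary form reduces to sequential prediction with the Krichevsky-Trofimov estimator:

```python
import math

# Bayes (mixture) coding for a memoryless binary source with the
# Krichevsky-Trofimov prior -- a standard textbook special case.
def kt_codelength(seq):
    """Cumulative code length -log2 P(seq) under the KT mixture,
    accumulated via the sequential predictive probabilities."""
    zeros = ones = 0
    total = 0.0
    for s in seq:
        p1 = (ones + 0.5) / (zeros + ones + 1.0)  # predictive prob. of a 1
        p = p1 if s == 1 else 1.0 - p1
        total += -math.log2(p)
        zeros += (s == 0)
        ones += (s == 1)
    return total

# The mixture is exchangeable: code length depends only on the counts.
print(kt_codelength([0, 1]) == kt_codelength([1, 0]))  # prints True
```

The context tree and side-information models of the paper replace this single memoryless predictor with a Bayes mixture over tree-structured model classes, but the per-symbol accumulation of predictive code length is the same.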
Non-perfectly secure identity based decentralized key distribution system
YOSHIDA Takahiro, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
27 ( 1 ) 327 - 330 2004.12
A Note on Decoding of Low Density Parity Check Codes with Small-Loop
KOBAYASHI Naoto, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
27 ( 1 ) 267 - 270 2004.12
A Study of Prediction Algorithms for Query Learning Using Orthogonal Designs
浮田 善文, 小泉大城, 松嶋 敏泰, 平澤 茂一
JSAI SIG Technical Report SIG-FPAI-A402-12 25 - 32 2004.11
An improved method of reliability-based maximum likelihood decoding algorithms using an order relation among binary vectors
H Yagi, M Kobayashi, T Matsushima, S Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E87A ( 10 ) 2493 - 2502 2004.10
View Summary
Reliability-based maximum likelihood decoding (MLD) algorithms for linear block codes have been widely studied. These algorithms efficiently search for the most likely codeword using the generator matrix whose most reliable and linearly independent k (the dimension of the code) columns form the identity matrix. In this paper, conditions for omitting unnecessary metric computations of candidate codewords are derived for reliability-based MLD algorithms. The proposed conditions utilize an order relation on binary vectors. A simple method for testing whether the proposed conditions are satisfied is devised. This test requires no real-number operations and, consequently, an MLD algorithm employing it performs fewer real-number operations than known reliability-based MLD algorithms.
N-021 A Student Questionnaire Analysis System based on Natural Language Expressions
Sakai Tetsuya, Ishida Takashi, Goto Masayuki, Hirasawa Shigeichi
3 ( 4 ) 325 - 328 2004.08
Redundancy of Bayes Coding for Nonstationary Sources with Piecewise Constant Parameters
SUKO Tota, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 104 ( 229 ) 23 - 28 2004.07
View Summary
In this paper, we treat universal source coding when the parameters of the probabilistic model of the source are unknown. The Bayes code is one of the well-known universal codes; it is Bayes-optimal in the sense of minimizing redundancy. Recently, there has been much research on Bayes codes for nonstationary sources, including sources with piecewise constant parameters. Such a source looks stationary within each section, but its parameters change abruptly between sections, so it is treated as nonstationary. The asymptotic behavior of the mean redundancy for this source has not been known. In this paper, we evaluate the asymptotic mean redundancy of this source when the number of parameter changes is not known, and we show that the Bayes code is universal under some conditions.
A ramp model for key distribution schemes
YOSHIDA Takahiro, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
Technical report of IEICE. ISEC 104 ( 53 ) 69 - 74 2004.05
View Summary
A key distribution scheme is a method to distribute off-line initial private pieces of information among a set of users, such that each group of a given size can compute a common key for a secure conference. In this paper, we consider a ramp model for key distribution schemes. In the ramp model, the required resources can be reduced at the cost of a security degradation that depends on the size of the group of users. We define a ramp model for key distribution schemes, show lower bounds on the size of a user's piece of information, and design a key distribution scheme for the ramp model.
On the Channel Capacity of Universal Channel Coding
NOMURA Ryo, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
Technical report of IEICE. ISEC 103 ( 713 ) 7 - 11 2004.03
View Summary
In this paper, we investigate the channel capacity in the case where the probabilistic model of the channel is unknown. First, we show the channel capacity in the universal case. To show our main theorem, we introduce a decoding scheme that minimizes the probability of error with respect to the Bayes criterion.
A Study on the Relation between Query Learning and Sequential Experimental Design
浮田 善文, 松嶋 敏泰, 平澤 茂一
IEICE Technical Report AI2003-63 1 - 6 2004.01
A collaboration support system for environmental protection using networks between Japan and ROC
The Forum on Environmental Protection, Taipei, R.O.C. 2004
Knowledge discovery from documents--A case of improvements for quality of education--
A short course at Tamkang University, Taipei, R.O.C. 2004
A support system for research activity using the Internet
A short course at Tamkang University, Taipei, R.O.C. 2004
Modification methods for construction and performance analysis of low-density parity-check codes over Markov-modulated channels
Proc. of ISITA 2004, Parma, Italy 206 - 211 2004
Efficient reliability-based soft decision decoding algorithm over Markov modulated channels
Proc. of ISITA 2004, Parma, Italy 823 - 826 2004
Exponential error bounds for block concatenated codes with tail biting trellis inner codes
Proc. of ISITA 2004, Parma, Italy 156 - 161 2004
Classification methods for documents with both free and fixed formats
Proceedings of the 2004 International Conference on Management Sciences and Decision Making, Taipei, R.O.C. 427 - 444 2004
On a Decoding Method for Tree Codes Using Likelihood Ratio Testing
新家稔央, 松嶋敏泰, 平澤茂一
IEICE Transactions (A) Vol.J87-A, No.2, pp.224-233 2004
Non-Hierarchical Clustering Considering Document Features
渡辺祐介, 細谷剛, 平澤茂一
Proc. 27th Symposium on Information Theory and its Applications (SITA2004), Gifu 61 - 64 2004
A Document Classification Method Considering the Latent Semantics of Words
松井治樹, 石田崇, 平澤茂一
Proc. 27th Symposium on Information Theory and its Applications (SITA2004), Gifu 57 - 60 2004
A Relevance Feedback Method Using Multiple Query Vectors
林下雄也, 八木秀樹, 平澤茂一
Proc. 27th Symposium on Information Theory and its Applications (SITA2004), Gifu 49 - 52 2004
A Modification of Soft-Decision Decoding Algorithms Based on the Sort-and-Match Method
贅田里詩, 細谷剛, 八木秀樹, 平澤茂一
Proc. 27th Symposium on Information Theory and its Applications (SITA2004), Gifu 567 - 570 2004
Word segmentation of the sequences emitted from a word-valued source
ISHIDA Takashi, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
27 ( 1 ) 135 - 138 2004
PKI Functions for Multi-Application IC Cards
梅澤克之, 州崎誠一, 手塚悟, 平澤茂一
Proc. 27th Symposium on Information Theory and its Applications (SITA2004), Gifu 415 - 418 2004
A Data Compression Method Based on Grammar Generation Considering the Most Frequent Context-Dependent N-grams
大井昭久, 石田崇, 平澤茂一
Proc. 27th Symposium on Information Theory and its Applications (SITA2004), Gifu 831 - 834 2004
An LZW Code with Modified Dictionary Indices
吉田幸二, 石田崇, 平澤茂一
Proc. 27th Symposium on Information Theory and its Applications (SITA2004), Gifu 827 - 830 2004
A decoding method of low-density parity-check codes over the binary symmetric channel
Gou Hosoya, Toshiyasu Matsushima, Shigeichi Hirasawa
Proc. 27th Symposium on Information Theory and its Applications (SITA2004), Gifu 2004
A heuristic search algorithm with the reduced list of test error patterns for maximum likelihood decoding
Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
Proc. 27th Symposium on Information Theory and its Applications (SITA2004), Gifu 2004
A Relevance Feedback Method Based on PLSI
足立鉱史, 石田崇, 平澤茂一
Proc. 27th Symposium on Information Theory and its Applications (SITA2004), Gifu 53 - 56 2004
An Information-Theoretic Study on the Generality of Multi-Echelon Inventory Systems
後藤正幸, 石田崇, 平澤茂一
Proc. 27th Symposium on Information Theory and its Applications (SITA2004), Gifu 131 - 134 2004
A collaboration support system for environmental protection using networks between Japan and ROC
Shigeichi Hirasawa
The forum on environmental protection, Taipei, ROC 2004
Analysis of Multiple-Choice and Free-Response Student Questionnaires Using Information Retrieval Techniques
石田崇, 足立鉱史, 後藤正幸, 酒井哲
Proc. 2005 Autumn National Conference of the Japan Society for Management Information, Nagoya 466 - 469 2004
A Research Support Environment Using the Internet
平澤茂一, 鴻巣敏之, 野村亮, 中澤真
Proc. 2004 Autumn National Conference of the Japan Society for Management Information, Nagoya 212 - 215 2004
Efficient reliability-based soft-decision decoding algorithm over Markov modulated channels
Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
Proc. of the 2004 ISITA, Parma, Italy 823 - 829 2004
Modification methods for construction and performance analysis of low-density parity-check codes over Markov-modulated channels
Gou Hosoya, Hideki Yagi, Shigeichi Hirasawa
Proc. of ISITA 2004, Parma, Italy 206 - 211 2004
Exponential error bounds for block concatenated codes with tail biting trellis inner codes
Shigeichi Hirasawa, Masao Kasahara
Proc. of the 2004 ISITA, Parma, Italy 156 - 161 2004
An improved method of reliability-based maximum likelihood decoding algorithms using an order relation among binary vectors
Hideki Yagi, Manabu Kobayashi, Toshiyasu Matsushima, Shigeich Hirasawa
IEICE Trans. Fundamentals E87-A ( 10 ) 2493 - 2502 2004
Reducing the Computation and Communication Costs of Key Exchange in Two-Party Encrypted Communication
榎木康二, 贅田里詩, 梅澤克之, 平澤茂一
Proc. FIT2004 (Forum on Information Technology) M-090 281 - 282 2004
Document Classification Considering Skewed Word Frequencies
山岸英貴, 松井治樹, 平澤茂一
Proc. FIT2004 (Forum on Information Technology) E-025 167 - 168 2004
Interactive Document Retrieval Using Words Potentially Important to the User
松下大輔, 足立鉱史, 平澤茂一
Proc. FIT2004 (Forum on Information Technology) E-010 131 - 132 2004
An Efficient Near-Maximum-Likelihood Decoding Algorithm for Tail-Biting Convolutional Codes
佐藤匡, 八木秀樹, 平澤茂一
IEICE Technical Report IT2004-25 2004
An Image Compression Method Considering the Importance of Frequency-Transformed Signals
森中亮, 大井照久, 石田崇, 平澤茂一
IEICE Technical Report IT2004-12 2004
Data Compression by the LZW Method Considering Reference Counts
芥子和宏, 吉田幸二, 石田崇, 平澤茂一
IEICE Technical Report IT2004-10 2004
Knowledge discovery from documents -- A case of improvements for quality of education
Shigeichi Hirasawa
A short course at Tamkang University, Taipei, ROC 2004
A support system for research activity using the Internet
Shigeichi Hirasawa
A short course at Tamkang University, Taipei, ROC 2004
Classification methods for documents with both fixed and free formats
Shigeichi Hirasawa, Wesley W. Chu
Proc. of the 2004 International Conference on Management Sciences and Decision Making, Taipei, ROC 427 - 444 2004
Parallel Propagation Algorithms for Tailbiting Convolutional Codes
MATSUSHIMA Toshiyasu, MATSUSHIMA Tomoko K., HIRASAWA Shigeichi
26 ( 2 ) 445 - 448 2003.12
An efficient block turbo decoding algorithm over channels with memory
WAKASA Shinji, YAGI Hideki, KOBAYASHI Manabu, HIRASAWA Shigeichi
26 ( 1 ) 85 - 88 2003.12
Bayes Coding for Sources with Piecewise Constant Parameters
SUKO Tota, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
26 ( 1 ) 165 - 168 2003.12
Complexity reduction of the Gazelle and Snyders decoding algorithm for maximum likelihood decoding
H Yagi, M Kobayashi, S Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E86A ( 10 ) 2461 - 2472 2003.10
View Summary
Several reliability-based code search algorithms for maximum likelihood decoding have been proposed. These algorithms search for the most likely codeword using the most reliable information set, where the leftmost k (the dimension of the code) columns of the generator matrix are the most reliable and linearly independent. In particular, D. Gazelle and J. Snyders have proposed an efficient decoding algorithm that requires only a small number of candidate codewords to find the most likely codeword. In this paper, we propose new efficient methods for both generating candidate codewords and computing their metrics at the decoder. The candidate codewords constructed by the proposed method are identical to those in the decoding algorithm of Gazelle et al. Consequently, the proposed decoding algorithm reduces the total time complexity compared to the decoding algorithm of Gazelle et al., without degradation in error performance.
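The most-reliable-information-set step described above can be sketched in a few lines: sort positions by reliability, then greedily keep those that are linearly independent over GF(2) via Gaussian elimination. This is an illustrative sketch with hypothetical names and toy inputs, not the algorithm of Gazelle and Snyders itself:

```python
def most_reliable_basis(G, reliability):
    """Select the k most reliable, linearly independent positions of an (n, k) code.

    G: generator matrix as a list of k rows over GF(2) (lists of 0/1 ints).
    reliability: per-position reliabilities, e.g. |LLR| of each received symbol.
    Returns the chosen column indices (the "most reliable information set").
    """
    k, n = len(G), len(G[0])
    order = sorted(range(n), key=lambda j: -reliability[j])  # most reliable first
    rows = [row[:] for row in G]
    basis, r = [], 0  # r = number of pivots fixed so far
    for j in order:
        # find a row (not yet used as a pivot) with a 1 in column j
        pivot = next((i for i in range(r, k) if rows[i][j]), None)
        if pivot is None:
            continue  # column j depends on the columns already chosen
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(k):  # eliminate column j from every other row
            if i != r and rows[i][j]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        basis.append(j)
        r += 1
        if r == k:
            break
    return sorted(basis)
```

For a toy (3, 2) code with generator rows [1,0,1] and [0,1,1] and reliabilities [0.1, 0.9, 0.8], the two most reliable positions 1 and 2 are linearly independent, so they form the information set.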
A source model with probability distribution over word set and recurrence time theorem
M Goto, T Matsushima, S Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E86A ( 10 ) 2517 - 2525 2003.10
View Summary
Nishiara and Morita defined an i.i.d. word-valued source, which is a pair consisting of an i.i.d. source with a countable alphabet and a function that transforms each symbol into a word over a finite alphabet. They showed the asymptotic equipartition property (AEP) of the i.i.d. word-valued source and discussed its relation to source coding algorithms based on a string parsing approach. However, their model is restricted to the i.i.d. case, and universal codes for a class of word-valued sources were not discussed. In this paper, we generalize the i.i.d. word-valued source to the ergodic word-valued source, which is defined by an ergodic source with a countable alphabet and a function from each symbol to a word. We show the existence of the entropy rate of the ergodic word-valued source and give its formula. Moreover, we show the recurrence time theorem for ergodic word-valued sources with a finite alphabet. This result clarifies that the Ziv-Lempel code (ZL77 code) is universal for ergodic word-valued sources.
Prediction Algorithm for Decision Tree Model
SUKO Tota, NOMURA Ryo, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Theoretical foundations of Computing 103 ( 246 ) 93 - 98 2003.07
View Summary
Conventionally, decision tree generation algorithms have been used when performing prediction with the decision tree model. These can be regarded as model selection algorithms based on the given data, with prediction then performed using the selected model. It has therefore been very difficult to evaluate the prediction error rate theoretically. In this work, we present a prediction algorithm that minimizes the average prediction error rate. First, we reformulate the decision tree model as a parametric stochastic model. The optimal prediction algorithm based on Bayes decision theory is then derived for this model. Furthermore, we describe an algorithm that computes the predictive distribution efficiently by restricting the model class.
Data Compression Algorithm Based on The Grammar Transform Using The Most Frequent N-gram
OHI Akihisa, ISHIDA Takashi, HIRASAWA Shigeichi
IEICE technical report. Information theory 103 ( 214 ) 13 - 18 2003.07
View Summary
Recently, compression algorithms based on grammars have been proposed. These algorithms first generate a grammar and then apply arithmetic coding. H. Nakamura et al. proposed an algorithm that registers the most frequent digram as a rule, while M. Kanda et al. proposed an algorithm that optimizes the grammar by calculating the ideal code length of the arithmetic code. In this paper, we propose a new algorithm that generates grammar rules for the most frequent N-gram as well as the most frequent digram.
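One step of such a digram-based grammar transform can be sketched as follows: count adjacent pairs, then replace the most frequent one with a fresh nonterminal symbol. This is a simplified Re-Pair-style illustration with hypothetical names, not the algorithm of Nakamura et al. or Kanda et al.:

```python
from collections import Counter

def digram_step(seq, new_symbol):
    """Replace the most frequent adjacent pair in seq with new_symbol.

    Returns (rewritten sequence, grammar rule) or (seq, None) when no pair
    occurs at least twice. Repeating this step builds up a grammar whose
    rules can then be entropy-coded (e.g. by an arithmetic code).
    """
    pairs = Counter(zip(seq, seq[1:]))
    if not pairs:
        return seq, None
    (a, b), count = pairs.most_common(1)[0]
    if count < 2:
        return seq, None  # replacement would not shorten the description
    out, i = [], 0
    while i < len(seq):  # greedy left-to-right replacement of the digram (a, b)
        if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
            out.append(new_symbol)
            i += 2  # consume both symbols of the digram
        else:
            out.append(seq[i])
            i += 1
    return out, (new_symbol, (a, b))
```

For instance, applying one step to the sequence a b a b replaces the digram (a, b) twice, yielding the sequence X X and the rule X -> a b.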
Performance of compression with the LZ78 codes for word-valued source
YOSHIDA Kouji, ISHIDA Takashi, HIRASAWA Shigeichi
IEICE technical report. Information theory 103 ( 214 ) 25 - 30 2003.07
View Summary
The Ziv-Lempel (LZ) codes, which are typical universal codes, have spawned modified algorithms of both theoretical and practical interest. Recently, the word-valued source has been proposed as a new class of source models, and its entropy rate has been analyzed. The LZ78 codes can be applied to a word-valued source in two ways: (i) incremental parsing for each word, or (ii) incremental parsing for each symbol. Comparing the compression performance of the LZ78 codes under these two methods is an interesting theme. In this paper, we show that the upper bound on the average redundancy achieves O(1/log n) and discuss the validity of the LZ78 codes.
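The incremental (LZ78) parsing compared in methods (i) and (ii) above can be sketched as follows; feeding characters gives the symbol-wise parsing, while feeding a pre-segmented word sequence gives the word-wise parsing. A hypothetical minimal sketch, not the paper's implementation:

```python
def lz78_parse(seq):
    """LZ78 incremental parsing: each phrase is a previously seen phrase
    extended by one new symbol, encoded as (phrase index, symbol)."""
    dictionary = {(): 0}  # phrase (as a tuple) -> phrase index; 0 = empty phrase
    phrases = []
    cur = ()
    for s in seq:
        cand = cur + (s,)
        if cand in dictionary:
            cur = cand  # keep extending the current match
        else:
            phrases.append((dictionary[cur], s))  # emit (prefix index, new symbol)
            dictionary[cand] = len(dictionary)
            cur = ()
    if cur:  # flush an unfinished final match
        phrases.append((dictionary[cur[:-1]], cur[-1]))
    return phrases
```

For example, `lz78_parse("aababc")` produces the phrases a, ab, abc as `[(0, 'a'), (1, 'b'), (2, 'c')]`, while `lz78_parse(text.split())` would parse word by word. The number of phrases governs the code length, which is why the word-wise and symbol-wise parsings can compress differently.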
A Note on Learning Boolean Functions by Using Orthogonal Design
UKITA Yoshifumi, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE transactions on fundamentals of electronics, communications and computer sciences 86 ( 4 ) 975 - 975 2003.04
An Approximation Algorithm of Bayes Coding to Reduce Memory Capacity
NOMURA Ryo, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
The Transactions of the Institute of Electronics,Information and Communication Engineers. A 86 ( 1 ) 46 - 59 2003.01
A sequential grammar transform based on the most frequent digram suitable for data compression
神田勝, 石田崇, 平澤茂一
Proceedings of the 26th Symposium on Information Theory and Its Applications (SITA2003), 695-698, 2003
A Method for Reducing Space Complexity of Reliability-based Heuristic Search Maximum Likelihood Decoding Algorithms
Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
第26回情報理論とその応用シンポジウム予稿集 pp.185-188 2003
Analysis of the decoding performance of low-density parity-check codes over channels with memory
細谷剛, 八木秀樹, 平澤茂一
Proceedings of the 26th Symposium on Information Theory and Its Applications (SITA2003), 269-272, 2003
A note on word-valued source models that do not satisfy the prefix condition
石田崇, 松嶋敏泰, 平澤茂一
Proceedings of the 26th Symposium on Information Theory and Its Applications (SITA2003), 77-80, 2003
Extraction of important parts of Japanese documents using dependency trees
伊藤潤, 酒井哲也, 平澤茂一
IPSJ SIG Technical Report, Natural Language Processing, 158-4, 19-24, 2003
A turbo decoding method for burst-error channels
若狭心司, 八木秀樹, 小林学, 平澤茂一
IEICE Technical Report, IT2003-21, 67-72, 2003
A construction method of low-density parity-check codes suitable for burst-error channels
細谷剛, 八木秀樹, 小林学, 平澤茂一
IEICE Technical Report, IT2003-20, 61-66, 2003
Low-density parity-check codes suitable for belief-propagation decoding: a shortening method for cyclic LDPC codes
贅田里詩, 細谷剛, 八木秀樹, 平澤茂一
IEICE Technical Report, IT2003-32, 51-56, 2003
Improving the efficiency of maximum likelihood decoding using permuted generator matrices based on reliability information: a complexity-reduction technique using an order relation among binary sequences
八木秀樹, 小林学, 平澤茂一
IEICE Technical Report, IT2003-6, 23-28, 2003
On the properties of word-valued sources with word sets that do not satisfy the prefix condition
石田崇, 後藤正幸, 松嶋敏泰, 平澤茂一
IEICE Technical Report, IT2003-5, 23-28, 2003
On universality of both Bayes codes and Ziv-Lempel codes for sources which emit data sequence by block unit
T Ishida, M Gotoh, S Hirasawa
ELECTRONICS AND COMMUNICATIONS IN JAPAN PART III-FUNDAMENTAL ELECTRONIC SCIENCE 86 ( 1 ) 58 - 69 2003
View Summary
Ziv-Lempel (ZL) codes and Bayes codes are typical universal codes. An improved algorithm of the ZL code is widely used in compression software. On the other hand, practical use of Bayes codes is difficult due to the large amount of computation needed. However, a realizable algorithm in terms of computation effort has been constructed for the FSMX model group [9]. In this paper, an information source generating a sequence by word units is assumed as a model that can represent the probabilistic structure of actual data such as text data. The asymptotic compression performance of both codes is analyzed and evaluated for the information source class (information source for the block unit) with a constant (fixed) word length. As a result, it is found that Bayes code cannot directly be universal as a coding algorithm for symbol units. On the other hand, the ZL78 code can be directly universal. Also, a configuration method for the Bayes coding method is given for an information source with a block unit. (C) 2002 Wiley Periodicals, Inc.
A class model and its verification
石田崇, 伊藤潤, 後藤正幸, 酒井哲也
Proceedings of the 2003 Autumn National Conference of the Japan Society for Management Information (JASMIN), Hakodate, 226-229, 2003
An automatic analysis method for document files using Bayesian statistics
後藤正幸, 伊藤潤, 石田崇, 酒井哲也
Proceedings of the 2003 Autumn National Conference of the Japan Society for Management Information (JASMIN), Hakodate, 28-31, 2003
Knowledge discovery from documents using PLSI
伊藤潤, 石田崇, 後藤正幸, 酒井哲也
Proceedings of FIT 2003, Ebetsu, IEICE, vol. 2, 83-84, 2003
Representation method for a set of documents from the viewpoint of Bayesian statistics
M Goto, T Ishida, S Hirasawa
2003 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS, VOLS 1-5, CONFERENCE PROCEEDINGS 4637 - 4642 2003
View Summary
In this paper, we consider the Bayesian approach to the representation of a set of documents. In this field, many models have been proposed, such as latent semantic analysis (LSA), probabilistic latent semantic analysis (PLSA), the Semantic Aggregate Model (SAM), and Bayesian Latent Semantic Analysis (BLSA). In this paper, we formulate the Bayes optimal solutions for estimating the parameters and selecting the dimension of the hidden latent class in these models, and analyze their asymptotic properties.
Knowledge acquisition from documents with both fixed and free formats
S Hirasawa, WW Chu
2003 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS, VOLS 1-5, CONFERENCE PROCEEDINGS 4694 - 4699 2003
View Summary
Based on techniques in information retrieval, we discuss methods for knowledge acquisition from documents composed of both fixed and free formats. Documents with the fixed format contain items such as selections from sentences, words, symbols, or numbers, while documents with the free format are ordinary texts. In this paper, starting with an item-document matrix and a term-document matrix used to represent a document set, we propose a new method for knowledge acquisition that takes both fixed and free formats into account simultaneously. A method based on the Probabilistic Latent Semantic Indexing (PLSI) model is used for clustering a set of documents. The proposed method is applied to a document set given by student questionnaires collected for the purpose of faculty development. We show the effectiveness of the proposed method compared to the conventional method.
A research support system using the Internet
中澤真, 野村亮, 鴻巣敏之, 松嶋敏泰
Japan Universities Association for Computer Education, Proceedings of the FY2003 National Conference on University Informatization, Tokyo, 72-73, 2003
Analysis of multiple-choice and free-response questionnaires on classes
平澤茂一, 石田崇, 伊藤潤
Japan Universities Association for Computer Education, Proceedings of the FY2003 National Conference on University Informatization, Tokyo, 144-145, 2003
Knowledge discovery from multiple-choice and free-response questionnaires
後藤正幸, 酒井哲也, 伊藤潤, 石田崇
Proceedings of the PC Conference, Kagoshima, 43-46, 2003
Efficient analysis of class questionnaires using information retrieval techniques
酒井哲也, 伊藤潤, 後藤正幸, 石田崇
Proceedings of the 2003 Spring National Conference of the Japan Society for Management Information (JASMIN), Tokyo, 182-185, 2003
A note on learning Boolean functions using orthogonal designs
浮田善文, 松嶋敏泰, 平澤茂一
IEICE Transactions A (Japanese Edition), vol. J86-A, no. 4, 482-490, 2003
Information Theory and Its Applications Series 3: Coding Theory and Its Applications
平澤茂一
edited by the Society of Information Theory and Its Applications, Baifukan, 2003
A note on learning Boolean functions using sequential experimental design
浮田 善文, 松嶋 敏泰, 平澤 茂一
IEICE Technical Report, COMP2002-52, 47-53, 2002.11
HOSOYA Gou, YAGI Hideki, KOBAYASHI Manabu, HIRASAWA Shigeichi
IEICE technical report. Information theory 102 ( 198 ) 19 - 24 2002.07
View Summary
The sum-product algorithm, a well-known decoding algorithm for LDPC codes, is based on APP decoding of each received symbol. The channel parameters are usually assumed to be known, since the likelihood of the channel noise is needed. When applying it to practical channels, however, the channel parameters are rarely known. In this paper, we propose a decoding algorithm that jointly estimates unknown channel parameters, and we show that its bit error probability is close to that of a decoding algorithm in which the parameters are known.
An Analysis of Dogancay-Kennedy Blind Equalization Algorithm using Channel Noise Statistics
YAGI Hideki, KOBAYASHI Manabu, HIRASAWA Shigeichi
101 ( 731 ) 83 - 88 2002.03
View Summary
One of the most prominent blind channel equalizers is the Godard algorithm, to which many researchers have recently devoted attention. Since the Godard algorithm has local minima, which lead to inferior equalization performance, K. Dogancay and R. A. Kennedy reformulated it so that the equalizer parameters converge globally and equalization can be performed more robustly. In this paper, we propose an equalization algorithm whose equalization parameter is proven to control the bias of the estimator under a certain noise model. Furthermore, an efficient adaptive implementation of the algorithm is proposed.
On the properties of sources that emit sequences in word units
石田崇, 後藤正幸, 松嶋敏泰, 平澤茂一
Proceedings of the 25th Symposium on Information Theory and Its Applications (SITA2002), 695-698, 2002
Improving classification accuracy by generating subcategories in automatic text classification
加藤大樹, 鈴木誠, 平澤茂一
Proceedings of the 25th Symposium on Information Theory and Its Applications (SITA2002), 475-478, 2002
A word selection method combining factor analysis and clustering for automatic document classification
斉藤剛, 鈴木誠, 平澤茂一
Proceedings of the 25th Symposium on Information Theory and Its Applications (SITA2002), 471-474, 2002
Collaborative filtering considering similarity of observed values
大島敬志, 鈴木誠, 平澤茂一
Proceedings of the 25th Symposium on Information Theory and Its Applications (SITA2002), 467-470, 2002
On turbo decoding methods for channels with hidden Markov noise
大島英明, 小林学, 平澤茂一
Proceedings of the 25th Symposium on Information Theory and Its Applications (SITA2002), 179-182, 2002
Soft-decision methods based on reliability information for channels with hidden Markov noise
八木秀樹, 小林学, 平澤茂一
Proceedings of the 25th Symposium on Information Theory and Its Applications (SITA2002), 11-14, 2002
A method for extracting Web communities from link structure
小森泉, 石田崇, 後藤正幸, 平澤茂一
2002 Autumn Conference of the Japan Academy of Business Administration, 2002
On multifaceted consistency evaluation of AHP based on statistical considerations
浦野真穂, 石田崇, 後藤正幸, 平澤茂一
2002 Autumn Conference of the Japan Academy of Business Administration, 2002
An important-sentence extraction method using inter-sentence word co-occurrence similarity
伊藤潤, 石田崇, 後藤正幸, 平澤茂一
Proceedings of FIT (Forum on Information Technology) 2002, E-1, 83-84, 2002
An importance-ranking method for e-mail documents based on cluster generation
小川恭幸, 石田崇, 後藤正幸, 平澤茂一
Proceedings of FIT (Forum on Information Technology) 2002, E-3, 107-108, 2002
Computational source models and compression algorithms
中澤真, 松嶋敏泰, 平澤茂一
IEICE Technical Report on Information Theory, 19-24, 2002
A report on the information systems exercise course
吉田隆弘, 小林学, 平澤茂一
2002 PC Conference, 2002
A practical-use report on "Seminars and research supervision using the Internet"
野村亮, 中澤真, 松嶋敏泰, 平澤茂一
2002 PC Conference, 2002
A data compression method based on most-frequent-digram merging
神田勝, 石田崇, 小林学, 平澤茂一
IEICE Technical Report on Information Theory, 31-36, 2002
A region segmentation algorithm suitable for lossless compression of still images
佐藤琢麻, 石田崇, 小林学, 平澤茂一
IEICE Technical Report on Information Theory, 37-42, 2002
A note on low-density parity-check codes for channels with hidden Markov noise
細谷剛, 八木秀樹, 小林学, 平澤茂一
IEICE Technical Report on Information Theory, 19-24, 2002
A note on computational learning and data compression
中澤真, 松嶋敏泰, 平澤茂一
IEICE Technical Report on Information Theory, 37-42, 2002
Distributed Cooperative Problem based on Multi-Terminal Systems
YOSHIDA Takahiro, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
24 ( 1 ) 367 - 370 2001.12
A note on learning Boolean functions using orthogonal designs under a Bayes decision-theoretic formulation
浮田 善文, 松嶋 敏泰, 平澤 茂一
IEICE Technical Report, COMP2001-59, 51-58, 2001.11
A Formulation of Distributed Cooperative Problem based on Multi-Terminal Systems
YOSHIDA Takahiro, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 101 ( 177 ) 37 - 42 2001.07
View Summary
We consider the kind of distributed cooperative system defined as follows. First, a number of agents observe information about correlated states. Second, each agent receives partial information from other agents and sends part of its observed information to them. Finally, each agent makes full use of the observed information for group decision making. In this paper, we apply multi-terminal systems to distributed cooperative systems and formulate a distributed cooperative problem based on Bayes decision theory. In the case where the loss function is the logarithmic loss function, we derive the Bayes rule that minimizes the Bayes risk, and show that the Bayes risk of the Bayes rule is represented as two types of mutual information.
A maximum likelihood decoding method that reduces likelihood computation using permuted generator matrices
八木秀樹, 岡田知嗣, 小林学, 平澤茂一
IEICE Technical Report, IT2000-42, 2001
An extension of the Reddy-Robinson decoding method to list decoding and its error-correcting capability
岩下将人, 小林学, 平澤茂一
IEICE Technical Report, IT2000-41, 2001
Interleaver construction methods and the minimum distance for block turbo codes
小林学, 松嶋敏泰, 平澤茂一
Proceedings of the 24th Symposium on Information Theory and Its Applications (SITA2001), 95-98, 2001
A decoding method for concatenated codes using list decoding algorithms and its error-correcting capability
竹内公二, 小林学, 平澤茂一
Proceedings of the 24th Symposium on Information Theory and Its Applications (SITA2001), 639-642, 2001
A construction method of periodically time-varying convolutional codes using binary linear block codes
小笠原尚徳, 小林学, 平澤茂一
Proceedings of the 24th Symposium on Information Theory and Its Applications (SITA2001), 553-556, 2001
A method for extracting association rules from data containing both fixed-format and free-text items in large databases
熊谷貴禎, 小林学, 平澤茂一
Proceedings of the 24th Symposium on Information Theory and Its Applications (SITA2001), 855-858, 2001
Bayes prediction in contingency tables with incomplete data
本田真理, 後藤正幸, 平澤茂一
Proceedings of the 24th Symposium on Information Theory and Its Applications (SITA2001), 851-854, 2001
On the universality of LZ78 codes for sources that emit sequences in word units
石田崇, 後藤正幸, 平澤茂一
Proceedings of the 24th Symposium on Information Theory and Its Applications (SITA2001), 243-246, 2001
On mixture prediction with AR models based on the variational Bayes method
倉持佳生, 後藤正幸, 平澤茂一
Proceedings of the 24th Symposium on Information Theory and Its Applications (SITA2001), 847-850, 2001
On clustering questionnaire data represented as matrices
和田寛子, 後藤正幸, 平澤茂一
2001 Autumn Conference of the Japan Industrial Management Association, 156-157, 2001
"A research activity support system using the Internet": system configuration and evaluation
野村亮, 中澤真, 鴻巣敏之, 松嶋敏泰, 平澤茂一
2001 Autumn Conference of the Japan Academy of Business Administration, 2001
A note on formal languages and compression
中澤真, 松嶋敏泰, 平澤茂一
IEICE Technical Report on Information Theory, 19-24, 2001
An attribute-value discretization method for association rule extraction in large databases
大島敬志, 熊谷貴禎, 小林学, 平澤茂一
2001 IEICE Information and Systems Society Conference, D-4-6, 23, 2001
Automatic text classification considering compound words
斎藤剛, 熊谷貴禎, 小林学, 平澤茂一
2001 IEICE Information and Systems Society Conference, D-4-2, 19, 2001
Improving classification accuracy by generating subcategories in automatic text classification
加藤大樹, 熊谷貴禎, 小林学, 平澤茂一
2001 IEICE Information and Systems Society Conference, D-4-1, 18, 2001
A note on reasoning with uncertain knowledge based on Bayesian estimation
関崇博, 鈴木誠, 平澤茂一
2001 IEICE Information and Systems Society Conference, D-8-6, 67, 2001
"A research activity support system using the Internet": system configuration
平澤茂一, 松嶋敏泰, 鴻巣敏之, 酒井哲也, 中澤真, 李相協, 野村亮
2001 PC Conference, 2001
A decoding method for block turbo codes using generator matrices
大島英明, 小笠原尚徳, 小林学, 平澤茂一
IEICE Technical Report on Information Theory, 49-54, 2001
Generator matrices and performance evaluation of block turbo codes
小林学, 松嶋敏泰, 平澤茂一
IEICE Technical Report on Information Theory, 1-6, 2001
On the universality of Bayes codes and Ziv-Lempel codes for sources that emit sequences in block units
石田崇, 後藤正幸, 平澤茂一
IEICE Transactions A (Japanese Edition), vol. J84-A, no. 9, 2001
Learning from the classics: C. E. Shannon's "Information Theory"
平澤茂一
Management Systems, Japan Industrial Management Association, 83-86, 2001
A note on learning Boolean functions using the Fourier transform
浮田 善文, 松嶋 敏泰, 平澤 茂一
IEICE Technical Report, COMP2000-56, 49-55, 2000.11
Improvements of data mining algorithms and their applications
武田健一, 李相協, 平澤茂一
IPSJ 60th National Convention, 2000
On trellis sectionalization for block codes
八木秀樹, 岡田知嗣, 小林学, 平澤茂一
IEICE Technical Report, IT99-97, 2000
An Analysis on Error Rate of Statistical Model Selection Based on Bayes Decision Theory
GOTOH Masayuki, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
Journal of Japan Industrial Management Association 50 ( 6 ) 475 - 484 2000
View Summary
Statistical model selection is one of the most important problems in statistics, and many works have produced essential results. The conventional information criteria for model selection, such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and minimum description length (MDL), were derived from different viewpoints, and many other model selection criteria have also been reported. On the other hand, if we specify the model class and assume prior probabilities, we can obtain Bayes optimal model selection for a finite number of samples based on Bayes decision theory. Furthermore, we can assume various loss functions adapted to the purpose of model selection in practical cases. In this paper, we analyze the asymptotic properties of statistical model selection based on Bayes decision theory. First, we formulate the Bayes optimal solution, introducing a general loss function for practical problems. Moreover, we analyze upper limits on the error rate of the model selection.
On decision feedback schemes using list decoding for tree codes
新家稔央, 松嶋敏泰, 平澤茂一
IEICE Transactions A (Japanese Edition), J83-A ( 1 ), 67-82, 2000
On modeling and methods of reasoning with uncertain knowledge
鈴木誠, 松嶋敏泰, 平澤茂一
Transactions of Information Processing Society of Japan, 14-1, 1-11, 2000
On electronic cash applying Schnorr authentication without the cut-and-choose method
村上雅之, 小林学, 中澤真, 平澤茂一
Autumn Conference of the Japan Industrial Management Association, 234-235, 2000
A note on inductive inference considering relations among attributes
岩本佳久, 後藤正幸, 平澤茂一
Autumn Conference of the Japan Industrial Management Association, 176-177, 2000
A note on deductive reasoning with uncertainty
鈴木誠, 松嶋敏泰, 平澤茂一
Proceedings of the 23rd Symposium on Information Theory and Its Applications (SITA2000), 427-430, 2000
Maximum likelihood decoding of linear block codes using permuted generator matrices
岡田知嗣, 小林学, 平澤茂一
Proceedings of the 23rd Symposium on Information Theory and Its Applications (SITA2000), 13-16, 2000
Construction and evaluation of a reasoning system using uncertain knowledge
鈴木佐俊, 鈴木誠, 平澤茂一
IPSJ 61st National Convention, 2.13-14, 2000
On soft-decision decoding methods using list decoding algorithms
竹内公二, 岩下将人, 小林学, 平澤茂一
IEICE Technical Report, IT2000-21, 2000
A note on convolutional codes using the structure of block codes
小笠原尚徳, 岡田知嗣, 小林学, 平澤茂一
IEICE Technical Report, IT2000-20, 2000
Bayes optimal prediction with AR models considering observation noise
倉持佳生, 後藤正幸, 平澤茂一
IEICE Technical Report, IT2000-17, 2000
A note on Bayes coding for FSMX sources with varying parameters
鎌須賀敦之, 後藤正幸, 平澤茂一
IEICE Technical Report, IT2000-16, 2000
A note on Bayesian estimation from incomplete data
本田真理, 後藤正幸, 平澤茂一
IEICE Technical Report, IT2000-15, 2000
An association rule extraction method considering multi-valued attributes in large databases
熊谷貴禎, 李相協, 平澤茂一
2000 Annual Conference of the Japanese Society for Artificial Intelligence, 18-04, 388-391, 2000
On Analysis of noiseless decision feedback scheme using fixed size list decoder for tree codes
T Niinomi, T Matsushima, S Hirasawa
2000 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY, PROCEEDINGS 231 - 231 2000
View Summary
In this paper, a generalized proof is given for the coding theorem of [1], from which further discussion is obtained.
A Model for System Evaluation based on Information Theory
Shigeichi Hirasawa, Hiroshige
2000 International Symposium on Management Science (41-46) 2000
Improving the efficiency of clustering using genetic algorithms and the k-means method
李相協, 松嶋敏泰, 平澤茂一
Spring Conference of the Japan Industrial Management Association, 120-121, 2000
Estimation of probability distributions using histograms based on Bayes decision theory
和田寛子, 後藤正幸, 平澤茂一
Spring Conference of the Japan Industrial Management Association, 98-99, 2000
A Study on Difference of Codelengths between Codes based on MDL Principle and Bayes Codes for Given Prior Distributions
Masayuki Goto, Toshiyasu Matsushima, Shigeichi Hirasawa
Electronics and Communications in Japan, Part 3, 84 ( 4 ), 30-40, 2000
Asymptotic normality of extended posterior densities considering loss functions
後藤正幸, 松嶋敏泰, 平澤茂一
IEICE Transactions A (Japanese Edition), J83-A, 639-650, 2000
An algorithm for obtaining query strategies in learning Boolean functions from queries
浮田 善文, 松嶋 敏泰, 平澤 茂一
Proceedings of the 22nd Symposium on Information Theory and Its Applications (SITA99), 845-848, 1999.12
Almost sure and mean convergence of extended stochastic complexity
M Gotoh, T Matsushima, S Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E82A ( 10 ) 2129 - 2137 1999.10
View Summary
We analyze the extended stochastic complexity (ESC) proposed by K. Yamanishi. The ESC can be applied to learning algorithms in both on-line prediction and batch-learning settings. Yamanishi derived an upper bound of the ESC that holds uniformly for all data sequences, and a bound on its asymptotic expectation. However, Yamanishi concentrated mainly on the worst-case performance, and a lower bound had not been derived. In this paper, we show some interesting properties of the ESC that are similar to Bayesian statistics: the Bayes rule and asymptotic normality. We then derive the asymptotic formula of the ESC in the sense of almost sure and mean convergence within an error of o(1) using these properties.
Compression and decoding of an inverted file by using syndrome
FUTAGAMI Tsuneji, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 99 ( 235 ) 55 - 60 1999.07
View Summary
In document retrieval systems using keywords, an inverted file is often used. We propose a new method for compressing and decoding it. In the proposed method, the file is compressed and decoded in two stages, and syndromes are used in compression. We represent an inverted file by a simple mathematical model and evaluate the system by two quantities: the size of the secondary memory where the inverted file and the decode tree are stored, and the time complexity of decoding. The VG bound is used to derive the two quantities. Numerical calculation shows a trade-off relation between the two quantities: the time complexity decreases strikingly compared with the increase in memory, which shows the usefulness of the proposed method.
A Note on the Asymptotic Normality of Posterior Probability Densities
MARUYAMA Hideaki, GOTOH Masayuki, HIRASAWA Shigeichi
IEICE technical report. Information theory 99 ( 187 ) 79 - 84 1999.07
View Summary
Posterior distributions conditioned on a data sequence converge to normal distributions under proper assumptions. This characteristic is called asymptotic normality. In many previous works, convergence in probability or almost sure convergence, etc., have been considered. In this paper, we derive an asymptotic normality that holds uniformly for data sequences, without assuming the true distribution. On the other hand, the SC, an extension of the MDL criterion, is defined as the code length of the Bayes code or of the maximum likelihood code. The asymptotic formula of the SC has been discussed in the sense of uniform convergence, which is essential in the context of universal modeling. We show asymptotic normality in the sense of uniform convergence and derive the asymptotic formula of the SC, which holds uniformly for data sequences, directly from the uniform asymptotic normality.
Statistical model selection based on Bayes decision theory and its application to change detection problem
M Gotoh, S Hirasawa
INTERNATIONAL JOURNAL OF PRODUCTION ECONOMICS 60-1 629 - 638 1999.04
View Summary
The statistical model selection problems are discussed, and the criterion based on the Bayes decision theory for the detection of the true model is derived. We propose a new Bayesian model selection scheme, which can detect the change points of the parameter of the information source and also estimate each unknown parameter. In this paper, we formulate the Bayes optimal solution of the change detection problem, and analyze its property of the consistency. (C) 1999 Elsevier Science B.V. All rights reserved.
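As a minimal illustration of the Bayes-optimal change detection formulated above: for a Bernoulli sequence with one change point, a uniform change-point prior, and Beta(1,1) priors on the segment parameters, the posterior over the change point is proportional to the product of the two segments' marginal likelihoods. The names and priors below are assumptions for this sketch, not the paper's scheme:

```python
from math import lgamma

def log_evidence(ones, zeros, a=1.0, b=1.0):
    """log marginal likelihood of Bernoulli data under a Beta(a, b) prior."""
    return (lgamma(a + ones) + lgamma(b + zeros) + lgamma(a + b)
            - lgamma(a) - lgamma(b) - lgamma(a + b + ones + zeros))

def map_change_point(x):
    """MAP change point: the split maximizing the posterior (uniform prior)."""
    best_t, best_score = None, float("-inf")
    for t in range(1, len(x)):  # change occurs between x[t-1] and x[t]
        left, right = x[:t], x[t:]
        score = (log_evidence(sum(left), len(left) - sum(left))
                 + log_evidence(sum(right), len(right) - sum(right)))
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```

Because each segment's unknown parameter is integrated out, the detector also yields posterior estimates of the segment parameters from the Beta posteriors, mirroring the joint detection-and-estimation viewpoint of the abstract.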
On the Complete Weight Enumerators of Doubly-Extended Reed-Solomon Codes
Proceedings of the 22nd Symposium on Information Theory and Its Applications (SITA99), 661-664, 1999
A note on Ziv-Lempel codes from the viewpoint of statistical estimation
Proceedings of the 22nd Symposium on Information Theory and Its Applications (SITA99), 535-538, 1999
Sources that emit sequences in word units
Proceedings of the 22nd Symposium on Information Theory and Its Applications (SITA99), 359-362, 1999
On improving the efficiency of maximum likelihood decoding using permuted generator matrices based on reliability information
Proceedings of the 22nd Symposium on Information Theory and Its Applications (SITA99), 323-326, 1999
On the Deductive Reasoning Model and Method for Uncertainty
IEEE Proc of ICTA'99 (161-164) 1999
A note on automatic acquisition of information exchange rules in multi-agent systems
IPSJ 59th National Convention, 4J-2, 2-93, 1999
A note on parameter estimation by the EM algorithm
IPSJ 59th National Convention, 2J-4, 2-57, 1999
On Bayes codes for sequences that appear in word units
IEICE Technical Report, IT99-27, 1999
On encoding positive integers using flag patterns
IEICE Technical Report, IT99-23, 1999
Soft-decision decoding by erasure-and-error correction algorithms using test patterns
IEICE Technical Report, IT99-17, 1999
Maximum likelihood decoding of linear block codes using codewords of the dual code
IEICE Technical Report, IT99-16, 1999
An analysis of the code lengths of codes based on the MDL principle and of Bayes codes when prior distributions differ
IEICE Transactions A (Japanese Edition), J82-A, 698-708, 1999
A note on VF codes that suppress error propagation for Markov sources
IEICE Transactions A (Japanese Edition), J82-A, 736-741, 1999
Soft-decision decoding of binary BCH codes using decoding algorithms beyond the BCH bound
IEICE Transactions A (Japanese Edition), J82-A, 539-549, 1999
A Note on Decision Theoretic Formulation for Learning from Queries
Y. Ukita, T. Matsushima, S. Hirasawa
Transactions on Information Processing Society of Japan 39 ( 11 ) 2937 - 2948 1998.11 [Refereed]
View Summary
There are two main learning paradigms in machine learning. One is learning from examples and the other is learning from queries. In learning from queries, it is necessary to use a good query strategy, because the probability that the true hypothesis is learned depends on it. However, from the viewpoint of computational learning theory it is only possible to judge whether learning succeeds or not; it cannot be decided which query strategy is good. In this paper, we formulate learning from queries using decision theory and propose a method of evaluating query strategies, in which one of the learning criteria (success principle, efficiency principle) is constrained while the other is minimized. Furthermore, we propose a new lower bound for the branch-and-bound algorithm that efficiently obtains the optimal query strategy. This work is effective from the viewpoint of guaranteeing that the computational work for an oracle is minimized in learning.
Gaps between Theory and Applications
HIRASAWA Shigeichi
The Journal of the Institute of Electronics,Information and Communication Engineers 81 ( 10 ) 1011 - 1014 1998.10
A Note on Learning of Probabilistic Hypotheses from Membership Queries
Y. Ukita, T. Matsushima, S. Hirasawa
Proceedings of International Symposium on Information Theory and Its Applications 2 604 - 607 1998.10 [Refereed]
A Partially Decodable Code based on LZ78 Code
TSUSHIMA Katsuya, GOTO Masayuki, HIRASAWA Shigeichi
IEICE technical report. Information theory 98 ( 211 ) 1 - 6 1998.07
View Summary
LZ78 code is one of the most effective universal code which leads to many improved coding methods. However, even if we want to decode only the partial sequence then we must decode all of the sequence from the initial. In order to partially decode, the improved method is needed. For the LZ77 code, K.Iwata et al. proposed the improved partially decodable code. In this paper, we propose the new partially decodable code based on LZ78 code.
A Study on Lossy Compression Based on String Matching
MAEDA Koji, GOTO Masayuki, HIRASAWA Shigeichi
IEICE technical report. Information theory 98 ( 211 ) 7 - 12 1998.07
View Summary
Y. Steinberg and M. Gutman proposed a lossy compression algorithm based on the string matching algorithm used in the LZ77 code. Furthermore, Obata et al. proposed an improved algorithm that is efficient for finite i.i.d. sequences. This algorithm generates a new codeword near the selected sequences, but it is not effective for sources with finite memory. We therefore propose an efficient algorithm for Markov sources and show its characteristics through simulation experiments.
Parallel Architecture for Generalized LFSR in LSI Built-In Self Testing
MATSUSHIMA Tomoko K., MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE Trans. Fundamentals 81 ( 6 ) 1252 - 1261 1998.06
View Summary
This paper presents a new architecture for multiple-input signature analyzers. The proposed signature analyzer with Hδ inputs is designed by parallelizing a GLFSR(δ, m), where δ is the number of input signals and m is the number of stages in the feedback shift register. The GLFSR, developed by Pradhan and Gupta, is a general framework for representing LFSR-based signature analyzers. The parallelization technique described in this paper can be applied to any kind of GLFSR signature analyzer, e. g., SISRs, MISRs, multiple MISRs and MLFSRs. It is shown that a proposed signature analyzer with Hδ inputs requires less complex hardware than either single GLFSR(Hδ, m)s or a parallel construction of the H original GLFSR(δ, m)s. It is also shown that the proposed signature analyzer, while requiring simpler hardware, has comparable aliasing probability with analyzers using conventional GLFSRs for some CUT error models of the same test response length and test time. The proposed technique would be practical for testing CUTs with a large number of output sequences, since the test circuit occupies a smaller area on the LSI chip than the conventional multiple-input signature analyzers of comparable aliasing probability.
A Study of the Decision of Control Parameters for Adaptive Automatic-Repeat-reQuest Strategy
TSUKAMOTO Ken-ichiro, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
The Transactions of the Institute of Electronics, Information and Communication Engineers B, 81 ( 6 ), 391-400, 1998.06
On performance of prediction using side information
MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 98 ( 54 ) 7 - 12 1998.05
View Summary
In this paper, prediction using side information is studied. This problem has been investigated as predictive inference in the field of Bayesian theory and as on-line learning in the field of learning theory. The prediction problem under consideration is to predict -log p(y|x) using given side information x; it can be regarded as universal coding with side information. The Bayes risk of Bayes predictive inference and the maximin risk correspond to the mutual information of a certain multi-terminal channel and its capacity, respectively. For a certain prediction model, i.e., a multi-terminal channel, the Bayes optimal prediction is derived. Moreover, the asymptotic forms of the Bayes risk and the maximin risk are shown. This maximin risk can be regarded as a difficulty measure of predictive inference and learning.
On Decoding Methods beyond the BCH Bound and their Applications to Soft-Decision Decoding
KOBAYASHI Manabu, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
The Transactions of the Institute of Electronics,Information and Communication Engineers. A 81 ( 4 ) 751 - 762 1998.04
An Optimal Algorithm for Reinforcement Learning Problems Divided into a Learning Period and a Control Period
Yasunari Maeda, Yoshifumi Ukita, Toshiyasu Matsushima, Shigeichi Hirasawa
IPSJ Journal 39 ( 4 ) 1116 - 1126 1998.04 [Refereed]
A Note on Handling Knowledge Containing Contradictions
Mikiya Saito, Yoshifumi Ukita, Toshiyasu Matsushima, Shigeichi Hirasawa
Journal of the Japanese Society for Artificial Intelligence 13 ( 2 ) 252 - 262 1998.03 [Refereed]
On Error Probability of Model Selection based on Bayes Decision Theory
GOTOH Masayuki, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 97 ( 511 ) 43 - 48 1998.01
View Summary
In this paper, we discuss statistical model selection problems. The purpose of AIC differs from those of BIC and MDL, and AIC is not consistent although BIC and MDL are. Here a question arises: is AIC inconsistent even for its own purpose? On the other hand, if we specify the model class, we can formulate the Bayes optimal model selection for finite samples based on Bayes decision theory. In this paper, we show that model selection based on Bayes decision theory is consistent regardless of whether the purpose is prediction of future observations or detection of the true model.
Electronics, Information and Communication, and Mathematics
IEICE / Corona Publishing 1998
On the minimum distance of concatenated codes and decoding method up to the true minimum distance
T Kohnosu, T Nishijima, S Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E80A ( 11 ) 2111 - 2116 1997.11
View Summary
Concatenated codes have many remarkable properties from both the theoretical and practical viewpoints. The minimum distance of a concatenated code is at least the product of the minimum distances of its outer code and inner code. In this paper, we examine cases in which the minimum distance of a concatenated code exceeds this lower bound, and obtain a tighter bound or the true minimum distance by using the complete weight enumerator of the outer code and the Hamming weight enumerator of the inner code. Furthermore, we propose a new decoding method based on the Reddy-Robinson algorithm that uses a decoding method beyond the BCH bound.
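The product lower bound on the minimum distance can be checked by brute force on a toy concatenation. The sketch below is my illustration, not the paper's nonbinary setting with complete weight enumerators: it combines a (3,2) single-parity-check outer code (d_out = 2) with a (3,1) repetition inner code (d_in = 3).

```python
from itertools import product

def spc_encode(info):                      # (3,2) single parity check
    return list(info) + [info[0] ^ info[1]]

def rep_encode(bit):                       # (3,1) repetition
    return [bit] * 3

def concat_encode(info):
    return [b for sym in spc_encode(info) for b in rep_encode(sym)]

# Minimum distance of a linear code = minimum nonzero codeword weight.
weights = [sum(concat_encode(i)) for i in product([0, 1], repeat=2)
           if any(i)]
d_min = min(weights)
d_out, d_in = 2, 3
assert d_min >= d_out * d_in               # the product bound holds
print(d_min)  # 6: here the bound happens to be met with equality
```

For this toy pair the bound is tight; the paper's point is that for other code pairs the true minimum distance can exceed d_out·d_in.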
A Learning with Membership Queries to Minimize Prediction Error
Y. Ukita, T. Matsushima, S. Hirasawa
IEEE International Conference on Systems, Man, and Cybernetics 5 4412 - 4417 1997.10 [Refereed]
A Note on Trellis Coding for Data Compression
SUENAGA Takashi, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 97 ( 208 ) 55 - 60 1997.07
View Summary
In trellis coding for data compression, it is difficult to search over all codes, so a code is selected by local search. Meanwhile, the properties of the optimal code are given by rate-distortion theory. In this paper, we propose an algorithm which selects a good code by using these properties, so as to avoid settling on a locally optimal code as much as possible. To evaluate the algorithm, we first investigate, by simulation, the properties of the optimal code for small constraint lengths. We then compare, again by simulation, the results of the algorithm that uses the properties with those of the algorithm that does not.
On Statistical Model Selection based on Bayes Decision Theory
GOTOH Masayuki, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 97 ( 180 ) 37 - 42 1997.07
View Summary
In this paper, we analyze the asymptotic properties of statistical model selection based on Bayes decision theory. First, we formulate the change detection problem based on Bayes decision theory. In this formulation, we introduce a general loss function for practical problems. Moreover, we analyze an upper bound on the error rate of model selection.
A Computational Learning Model Based on Bayesian Statistics and Its Learnability
NAKAZAWA Makoto, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Theoretical foundations of Computing 97 ( 157 ) 71 - 78 1997.07
View Summary
In the PAC learning model and the U-learning model, the learning algorithm runs independently of the prior distribution. On the other hand, the Bayes algorithm utilizes the prior distribution and is Bayes optimal. Haussler has analyzed the sample complexity of the Bayes algorithm, but the algorithm usually requires a large amount of time. In this paper, we discuss the complexity of the Bayes algorithm, focusing not only on sample complexity but also on time complexity. In this new learning model, the complexity depends on both the concept class and the family of priors. On the basis of both complexities, we propose a new concept called Bayesian learnability. Moreover, we show the learnability of typical concept classes and the properties of Bayesian learnability.
An improvement of soft-decision maximum-likelihood decoding algorithm using hard-decision bounded-distance decoding
T Kaneko, T Nishijima, S Hirasawa
IEEE TRANSACTIONS ON INFORMATION THEORY 43 ( 4 ) 1314 - 1319 1997.07
Rapid communication, short report, research note, etc. (scientific journal)
View Summary
A new soft-decision maximum-likelihood decoding algorithm is proposed, which generates a set of candidate codewords using hard-decision bounded-distance decoding. By improving the method of generating input vectors for the bounded-distance decoding due to Kaneko et al., the decoding time complexity is reduced without degradation of performance. The space complexity depends on the bounded-distance decoding.
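The candidate-generation idea can be sketched in a Chase-style form: flip the least-reliable received positions, run a hard-decision bounded-distance decoder on each test pattern, and keep the most likely result. The (7,4) Hamming code and the brute-force distance-1 decoder below are stand-ins for the paper's general bounded-distance decoder; all parameters are illustrative assumptions.

```python
import itertools

G = [[1, 0, 0, 0, 1, 1, 0],            # a [7,4,3] Hamming generator
     [0, 1, 0, 0, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1],
     [0, 0, 0, 1, 1, 0, 1]]

CODEBOOK = set()
for info in itertools.product([0, 1], repeat=4):
    CODEBOOK.add(tuple(sum(i * g for i, g in zip(info, col)) % 2
                       for col in zip(*G)))

def bdd(hard):
    """Bounded-distance decoding: correct at most one error."""
    for c in CODEBOOK:
        if sum(h != b for h, b in zip(hard, c)) <= 1:
            return c
    return None

def chase_decode(r, t=2):
    """r: received real values (BPSK mapping 0 -> +1, 1 -> -1)."""
    hard = [0 if x >= 0 else 1 for x in r]
    # the t least-reliable positions have the smallest |r_i|
    lrp = sorted(range(len(r)), key=lambda i: abs(r[i]))[:t]
    best, best_metric = None, float("inf")
    for flips in itertools.product([0, 1], repeat=t):
        test = list(hard)
        for pos, f in zip(lrp, flips):
            test[pos] ^= f
        c = bdd(test)
        if c is None:
            continue
        # squared Euclidean distance to the BPSK image of c
        m = sum((x - (1 - 2 * b)) ** 2 for x, b in zip(r, c))
        if m < best_metric:
            best, best_metric = c, m
    return best

# A noisy observation of the all-zero codeword (+1, ..., +1):
r = [0.9, 0.1, -0.2, 1.1, 0.8, 0.7, 1.0]
print(chase_decode(r))  # (0, 0, 0, 0, 0, 0, 0)
```

The paper's contribution is precisely a smarter choice of the test patterns fed to the bounded-distance decoder; this sketch uses the plain exhaustive flips.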
A Formulation and Optimization of Data Retrieval Upon the Statistical Decision Theory
KAJI Mio, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
11 225 - 228 1997.06
A Survey on Channel Coding
HIRASAWA Shigeichi
Proceedings of the IEICE General Conference 1997 560 - 561 1997.03
On Clarke and Barron's Asymptotics of Bayes Codes
GOTOH Masayuki, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 96 ( 494 ) 73 - 78 1997.01
View Summary
The Bayes code is the Bayes optimal solution based on Bayes decision theory. It is the method which uses a mixture of all models in the model class as the coding function. B. S. Clarke and A. R. Barron analyzed the asymptotic mean codelength of the Bayes code and showed that Jeffreys' prior is asymptotically least favorable under entropy risk for i.i.d. sources. This paper consists of two parts. First, we show the asymptotic codelengths of the Bayes codes for individual sequences from a parametric model class. The main condition required here on the parameter space is that the posterior probability of the parameter has asymptotic normality. Generally speaking, the asymptotic normality of the posterior distribution holds for sources other than i.i.d. ones. Secondly, we prove that Clarke and Barron's asymptotics of the Bayes code hold for more general model classes than i.i.d. sources. The main condition required here is the central limit theorem for the maximum likelihood estimates. Since the target of source coding is not usually an i.i.d. source, the extension in this paper is useful.
On Lossy Compression Based on String Matching Algorithms
IEICE Technical Committee on Information Theory IT69;60 1997
Reducing the Computational Complexity of Chase Decoding Using a Decoding Algorithm beyond the BCH Bound
Proceedings of the 20th Symposium on Information Theory and Its Applications pp.857-860 1997
On Approximation of Mixture Distributions and Its Performance
Proceedings of the 20th Symposium on Information Theory and Its Applications pp.693-696 1997
A Note on Model Selection Methods for Tree-Structured Model Classes
Proceedings of the 20th Symposium on Information Theory and Its Applications pp.689-692 1997
A Note on Integration of Data Containing Uncertainty
Proceedings of the 20th Symposium on Information Theory and Its Applications pp.241-244 1997
A Note on Learning and Prediction Algorithms for Tree-Structured Model Classes
IEICE Technical Report IT97-36 (pp.73-78) 1997
A Note on Decoding Methods for Unequal Error Protection Codes
IEICE Technical Report IT97-28 (pp.25-30) 1997
On Error-Correction Decoding beyond the BCH Bound
IEICE Technical Report IT97-20 (pp.31-36) 1997
Mathematical Modeling of Data Search by Hashing and Asymptotic Evaluation of Search Efficiency
IEICE Technical Report IT97-14 (pp.79-84) 1997
Efficient Concept Learning Using Attribute Clusters
Japan Society for Management Information 1997
Efficient Concept Learning Using Similarity of Attribute Values
Japan Industrial Management Association, Spring Meeting 1997
On the Computational Complexity of Decoding beyond the BCH Bound Using the Berlekamp-Massey Algorithm
IEICE Transactions J80-A;9 (pp.1554-1558) 1997
On the minimum distance of binary concatenated codes
IEICE Transactions E-80A;5 (pp.922-923) 1997
A Learning Algorithm for Markov Decision Processes with Unknown Parameters
Yasunari Maeda, Yoshifumi Ukita, Toshiyasu Matsushima, Shigeichi Hirasawa
Proceedings of the 19th Symposium on Information Theory and Its Applications 597 - 600 1996.12
Notes on Decoding Method for Superimposed Codes and Its Performance
SATAKE Kenji, KASAHARA Masao, HIRASAWA Shigeichi
The Transactions of the Institute of Electronics,Information and Communication Engineers. A 79 ( 11 ) 1931 - 1934 1996.11
A Note on Concept Learning with Membership Queries
Y. Ukita, T. Matsushima, S. Hirasawa
Proceedings International Symposium on Information Theory and Its Applications 308 - 311 1996.09 [Refereed]
Parallel encoder and decoder architecture for cyclic codes
TK Matsushima, T Matsushima, S Hirasawa
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E79A ( 9 ) 1313 - 1323 1996.09
View Summary
Recently, the high-speed data transmission techniques that have been developed for communication systems have in turn necessitated the implementation of high-speed error correction circuits. Parallel processing has been found to be an effective method of speeding up operations, since the maximum achievable clock frequency is generally bounded by the physical constraints of the circuit. This paper presents a parallel encoder and decoder architecture which can be applied to both binary and nonbinary cyclic codes. The architecture allows H symbols to be processed in parallel, where H is an arbitrary integer, although its hardware complexity is not proportional to the number of parallel symbols H. As an example, we investigate hardware complexity for a Reed-Solomon code and a binary BCH code. It is shown that both the hardware complexity and the delay for a parallel circuit are much less than those with the parallel operation of H conventional circuits. Although the only problem with this parallel architecture is that the encoder's critical path length increases with H, the proposed architecture is more efficient than a setup using H conventional circuits for high data rate applications. It is also suggested that a parallel Reed-Solomon encoder and decoder, which can keep up with optical transmission rates, i.e., several gigabits per second, could be implemented on one LSI chip using current CMOS technology.
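A software analogue of processing H symbols per clock is table-driven CRC computation, where a precomputed table lets one step consume H = 8 bits while producing the same polynomial remainder as the bit-serial LFSR. This is my illustration of the principle, not the paper's circuit; the CRC-8 polynomial below is an arbitrary choice.

```python
POLY = 0x07  # x^8 + x^2 + x + 1, an illustrative choice

def crc8_bitwise(data: bytes) -> int:
    """Bit-serial polynomial division: one bit per iteration."""
    crc = 0
    for byte in data:
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            fb = ((crc >> 7) & 1) ^ bit
            crc = ((crc << 1) & 0xFF) ^ (POLY if fb else 0)
    return crc

# Precompute the effect of eight blank shift steps for every state.
TABLE = []
for v in range(256):
    crc = v
    for _ in range(8):
        crc = ((crc << 1) & 0xFF) ^ (POLY if crc & 0x80 else 0)
    TABLE.append(crc)

def crc8_bytewise(data: bytes) -> int:
    """Table-driven division: eight bits per iteration."""
    crc = 0
    for byte in data:
        crc = TABLE[crc ^ byte]
    return crc

msg = b"parallel cyclic encoding"
assert crc8_bitwise(msg) == crc8_bytewise(msg)
```

Both routines compute the same remainder of the message polynomial, which is exactly the equivalence a parallel cyclic encoder exploits in hardware.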
On Bayes Coding in the case of limited memories
NOMURA Ryo, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 96 ( 161 ) 31 - 36 1996.07
View Summary
Recently, Bayes coding has been one of the most active subjects in research on universal coding. Bayes coding is constructed based on statistical decision theory, and it determines the coding probability so as to minimize the mean redundancy. However, in the Bayes coding algorithm for FSMX sources, the coding probability is calculated using a context tree, which requires an enormous amount of memory. In this paper, we first show the lower bound of the mean code length under limited memory. Secondly, we propose a Bayes coding algorithm under limited memory and investigate its performance.
A study of Ziv-Lempel Algorithm from the view point of Bayes Code
KAKU Nikkou, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 96 ( 161 ) 37 - 42 1996.07
View Summary
The Ziv-Lempel algorithm and the Bayes algorithm are well known as universal codes. In recent years it has become clear that the Ziv-Lempel algorithm implicitly assumes a probability distribution of the information source. The relation between the ZL code and the Bayes code has been discussed from three points of view [2]. From previous research it is known that the Bayes code surpasses the ZL code in compression rate, while the ZL code surpasses the Bayes code in complexity. In this paper, from these three points of view, we consider the relation between the ZL code and the Bayes code by introducing an experimental algorithm.
A Note on Learning Concepts with Uncertainty by Queries
Yoshifumi Ukita, Toshiyasu Matsushima, Shigeichi Hirasawa
Annual Conference of the Japanese Society for Artificial Intelligence (10th) 75 - 78 1996.06
An Analysis on Difference of the Codelengths between Codes Based on MDL Principle and Bayes Codes
GOTOH Masayuki, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 96 ( 53 ) 13 - 18 1996.05
View Summary
The minimum description length (MDL) principle proposed by J. Rissanen has been studied not only in universal source coding but also in data analysis, learning theory, and statistical model selection. The MDL principle minimizes the total description length of the data and of a probabilistic model, and is closely related to Bayesian statistics, because the MDL implicitly assumes a prior distribution. On the other hand, Bayes coding is the method which uses a mixture of all models in the model class as the coding function. The Bayes code is the Bayes optimal solution with respect to code length. Therefore, if we assume the same prior distribution in both schemes, it is clear that the Bayes code is superior to the code based on the MDL principle. In this paper, we analyze the difference between the codelengths of the MDL code and the Bayes code, and show that the Bayes code is superior to the code based on the MDL principle.
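The codelength comparison can be tried numerically on a Bernoulli sequence: the Bayes mixture codelength under the Krichevsky-Trofimov prior Beta(1/2, 1/2) versus a simple two-part (MDL-style) code that first describes the count of ones in log2(n+1) bits and then the data at the empirical entropy rate. This is my toy instance, not the paper's analysis, and the inequality is asserted only for this particular sequence.

```python
import math

def kt_codelength(ones, zeros):
    """-log2 of the KT mixture probability of a binary sequence."""
    n = ones + zeros
    log_p = (math.lgamma(ones + 0.5) + math.lgamma(zeros + 0.5)
             - math.lgamma(n + 1) - 2 * math.lgamma(0.5))
    return -log_p / math.log(2)

def two_part_codelength(ones, zeros):
    """Describe the count of ones, then the data at empirical entropy."""
    n = ones + zeros
    p = ones / n
    h = 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return math.log2(n + 1) + n * h

bayes = kt_codelength(5, 15)       # ~18.7 bits
mdl = two_part_codelength(5, 15)   # ~20.6 bits
assert bayes < mdl                 # on this sequence the mixture wins
```

Note that a more refined two-part code narrows the gap; the sketch only illustrates the direction of the comparison for one crude MDL-style code.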
Parallel Architecture for Signature Analyzer in LSI Self-Testing
MATSUSHIMA Tomoko K., MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
Technical report of IEICE. ICD 96 ( 21 ) 85 - 92 1996.04
View Summary
Several kinds of LFSR-based signature analyzers have been proposed for LSI built-in self test. The MLFSR, proposed for multiple-output CUTs by Pradhan et al., can achieve a better aliasing probability than other schemes, such as the multiple MISR scheme, with comparable hardware complexity. The only problem with this scheme is that the hardware complexity grows more than proportionally as δ increases, where δ is the number of parallel signals input to the MLFSR, so the MLFSR is not adequate for testing CUTs with a large number of output signals. In this report, we propose a parallel architecture for signature circuits applicable to such CUTs. This architecture allows Hδ bits to be processed in parallel, where H can be chosen as an arbitrary integer. It is shown that the complexity of the parallel circuit with Hδ input signals is much smaller than that of a parallel operation of H conventional circuits with δ input signals.
A formulation by minimization of differential entropy for optimal control system
M Gotoh, S Hirasawa, N Tawara
IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES E79A ( 4 ) 569 - 577 1996.04
View Summary
This paper proposes a new formulation which minimizes the differential entropy for an optimal control problem. The conventional criterion of optimal regulator control is a standard quadratic cost function E[M{x(t)}^2 + N{u(t)}^2], where x(t) is a state variable, u(t) is an input value, and M and N are positive weights. However, as the number of variables in the system increases, it becomes complex to find the solution of the optimal regulator control, so simplicity of the solution is required. In contrast to optimal regulator control, we propose minimum entropy control, which minimizes the differential entropy of the weighted sum of x(t) and u(t). The solution is derived on the assumptions that the control is linear and that x(t)u(t) ≤ 0 is satisfied. As a result, the formula of the minimum entropy control is very simple and clear. This result will be useful for further work on simple control formulations with multiple variables.
A Note on Design Theory for Information Communication Systems
IEICE Technical Committee on Information and Communication Ethics FACE96;28 1996
On Fixed Delay Decoding for ARQ Scheme using the Viterbi Algorithm
1996 Symposium on Information Theory and Its Applications 1996
Soft-Decision Decoding Using an Error-Correction Decoder beyond the BCH Bound
1996 Symposium on Information Theory and Its Applications 1996
On Minimum Distance of Concatenated Codes and Decoding based on True Minimum Distance
1996 Symposium on Information Theory and Its Applications 1996
On Data Retrieval Using Coding
1996 Symposium on Information Theory and Its Applications 1996
A Note on the Minimum Distance of Concatenated Codes
IEICE 1996 Society Conference 1996
A Draft Code of Ethics for Information and Communications and Its Commentary: On Quality Assurance
IEICE 1996 Society Conference 1996
On the Computational Complexity of Decoding beyond the BCH Bound Using Berlekamp-Massey Decoding
IEICE Technical Committee on Information Theory IT96;19 1996
A Note on the Selection Error Probability in Statistical Model Selection Problems
IEICE Technical Committee on Information Theory IT96;11 1996
On Decision Feedback Using Rejected Received Sequences over Strongly Noisy Channels
IEICE Technical Committee on Information Theory IT96;2 1996
Coding Techniques and Their Applications
IEICE Kansai Branch Lecture Meeting 1996
Coding Techniques and Related Topics: Reflections on Research
IEICE 1996 Society Conference 1996
Coding Techniques and Mathematical Informatics
Shigeichi Hirasawa
Journal of IEICE 79;7 703 - 706 1996
A Study on Parallel Encoder and Decoder for Cyclic Codes
MATSUSHIMA Tomoko K., MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 95 ( 347 ) 23 - 28 1995.11
View Summary
In this paper, a parallel encoder and decoder architecture for cyclic codes is presented. This architecture can decrease the encoding and decoding time to 1/H (H > 1) of that of a conventional codec, where H is the number of symbols processed in parallel. As an example, we investigate the hardware complexity for a (255,251) Reed-Solomon code over GF(2^8). It is shown that the hardware complexity for a set of parallel encoder and decoder is much smaller than that for a parallel operation of H conventional sets.
A Note on Concept Learning with Queries
Yoshifumi Ukita, Toshiyasu Matsushima, Shigeichi Hirasawa
Proceedings of the 18th Symposium on Information Theory and Its Applications 537 - 540 1995.10
Research Group on Foundations of Knowledge Information Processing and Its Applications
Shigeichi Hirasawa, Toshiyasu Matsushima
経営システム = Management systems : a journal of Japan Industrial Management Association 5 ( 2 ) 140 - 142 1995.08
Notes on Superimposed Codes on the basis of a probabilistic method
SATAKE Kenji, KASAHARA Masao, HIRASAWA Shigeichi
IT95-31 95 ( 175 ) 37 - 41 1995.07
View Summary
In this paper, we present the results of analyses of superimposed codes constructed in a probabilistic manner. The superimposed codes are superior to conventional ones not only in code rate but also in decoding complexity. The analyses show that codes with rates larger by a factor of approximately 3 than conventional ones exist under the same probability of decoding error and the same level of decoding complexity. Finally, a table of efficient code parameters is given.
An Idea In The Field Of Markov Decision Processes With Unknown Transition Probabilities
MAEDA Yasunari, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 95 ( 145 ) 25 - 30 1995.07
View Summary
In this paper, we propose a new algorithm for Markov decision processes with unknown transition probabilities. In previous research, decision algorithms have been evaluated only by a criterion of convergence or of rewards. The proposed algorithm is considered from both criteria. The effectiveness of our algorithm over previous algorithms is shown by simulations.
A Test for Pseudo-Random Numbers Using Bayes Coding
BAMBA Masakazu, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 95 ( 145 ) 37 - 42 1995.07
View Summary
The most common way to obtain randomness in computational simulation is to use pseudo-random numbers. This paper gives a new testing approach using a Bayes coding algorithm. The proposed method covers several traditional tests at once. The approach comes from information theory, like tests using the Lempel-Ziv algorithm, but we test not only redundancy: our test, using the Bayes coding algorithm, can also analyze the probability of each sequence.
A study of data compaction by Bayes Coding
SHIMIZU Atsushi, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 95 ( 145 ) 31 - 36 1995.07
View Summary
Among universal coding problems, Bayes coding is one of the most active subjects in recent research on data compaction. The Bayes coding algorithms for FSMX sources proposed in recent research use a fixed-depth context tree. In this paper, we study the compression ratio of a newly proposed algorithm which uses a variable-depth context tree, and investigate the difference in redundancy caused by the prior probability distribution of the parameters.
A Bayes coding algorithm for Markov models
MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
IEICE technical report. Information theory 95 ( 79 ) 1 - 6 1995.05
View Summary
The optimal universal code for FSMX sources with respect to the Bayes redundancy criterion is deduced under the condition that the model, the probabilistic parameters, and the initial state are unknown. An efficient algorithm for the optimal code is also proposed using a context tree. Although the context tree weighting (CTW) algorithm needs the context in the initial situation and cannot treat an infinite-depth context tree, this algorithm is free from these problems. The algorithm is not only Bayes optimal for FSMX sources but also asymptotically optimal for stationary ergodic sources. Moreover, the algorithm can be regarded as a generalization of the Ziv-Lempel algorithm.
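The sequential form of such a Bayes code can be sketched for a binary order-1 Markov source, a drastic simplification of the FSMX/context-tree setting: each context keeps Krichevsky-Trofimov counts, and the coding probability of the next symbol is the posterior predictive probability. The initial context and the test sequence are illustrative assumptions.

```python
import math

def bayes_markov_codelength(seq):
    """Sequential Bayes coding probability for an order-1 binary source.

    Each context s keeps counts; the next-symbol probability is the
    KT posterior predictive (c_s(x) + 1/2) / (c_s(0) + c_s(1) + 1).
    """
    counts = {}                        # context -> (count of 0, count of 1)
    ctx = 0                            # assume an initial context of 0
    total_bits = 0.0
    for x in seq:
        c0, c1 = counts.get(ctx, (0, 0))
        p = ((c1 if x else c0) + 0.5) / (c0 + c1 + 1.0)
        total_bits += -math.log2(p)    # ideal codelength of this symbol
        counts[ctx] = (c0 + (x == 0), c1 + (x == 1))
        ctx = x
    return total_bits

# A highly predictable alternating sequence compresses well:
seq = [0, 1] * 32
bits = bayes_markov_codelength(seq)
assert bits < len(seq)                 # well under 1 bit per symbol
```

Because the predictive probabilities multiply into a proper mixture probability of the whole sequence, the resulting codelengths are exactly those of the Bayes mixture, computed online.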
On Efficient Inference Methods for Disjunctive Logic Programming
Proceedings of the 18th Symposium on Information Theory and Its Applications / SITA 1995
On Soft-Decision Decoding of Reed-Solomon Codes
Proceedings of the 18th Symposium on Information Theory and Its Applications / SITA 1995
On Decision Feedback Schemes Using Multiple Encoding
Proceedings of the 18th Symposium on Information Theory and Its Applications / SITA 1995
A Note on Data Analysis Based on Bayes Decision Theory
Proceedings of the 18th Symposium on Information Theory and Its Applications / SITA 1995
On Inference Methods for Uncertain Knowledge Considering Observation Noise
Proceedings of the 18th Symposium on Information Theory and Its Applications / SITA 1995
On the Optimum Learning Algorithm for Boolean Functions and the Learnability Based on Bayesian Statistics
Proceedings of the 18th Symposium on Information Theory and Its Applications / SITA 1995
A Posterior Probability Estimation Algorithm Based on Universal Source Coding Algorithms
Proceedings of the 18th Symposium on Information Theory and Its Applications / SITA 1995
A Study on Parallel Encoders for Cyclic Codes
Proceedings of the 18th Symposium on Information Theory and Its Applications / SITA 1995
A Decoding Method for Concatenated Codes with Even Minimum Distance
Proceedings of the 18th Symposium on Information Theory and Its Applications / SITA 1995
A Note on Maximum Likelihood Decoding of Linear Block Codes
IEICE Technical Committee on Information Theory IT95-22 1995
On Parameter Estimation of Source Models and Ziv-Lempel Codes
IEICE Technical Committee on Information Theory IT95-21 1995
Lossy Compression Using String Matching
IEICE Technical Committee on Information Theory IT95-20 1995
A Decoding Method by Superposition for Burst Errors of Binary Cyclic Codes
IEICE Transactions A J78-A, 6 1995
Encyclopedia of Mathematical Information Science
Asakura Publishing 1995
Disk Allocation Methods for Cartesian Product Files by Using Error- Correcting Codes
Kohnosu Toshiyuki, Nishijima Toshihisa, Hirasawa Shigeichi
IEICE technical report. Information theory 94 ( 171 ) 13 - 17 1994.07
View Summary
Allocation methods for Cartesian product files on multiple disks by using linear error-correcting codes are proposed. These allocation methods can be applied to queries in which the characteristics of the attributes are not uniform. The structure of Reed-Solomon codes over GF(p^m), which give the strictly optimal allocation when all possible partial match queries are equiprobable, and the mapping given by a basis of GF(p^m) over its subfields are used. As a result, a file is constructed from records whose attribute domains are partitioned into different numbers of parts. The properties of this method are described.
A Note on Inductive Learning of Probabilistic Model
Abe Shinji, Mukouchi Takafumi, Matsushima Toshiyasu, Hirasawa Shigeichi
IEICE technical report. Information theory 94 ( 171 ) 43 - 48 1994.07
View Summary
This paper discusses an inductive learning method for a probabilistic model applied to a diagnostic system. In this system, empirical knowledge is learned from given data as the probability distribution of failures, and tests are selected according to this knowledge. We use a presumption tree to represent the probability distribution, and adopt the MDL principle to select the most appropriate presumption tree. However, since the probability distribution becomes complicated, the complexity of model selection increases exponentially with the number of attributes, so an efficient search technique is required. In this paper, we propose a new efficient algorithm to search for the optimal presumption tree which minimizes its description length, and we show that our algorithm compensates for the weakness of the conventional algorithm.
A Note on the Construction Method of Decision Trees.
Umezawa Katsuyuki, Niinomi Toshihiro, Matsushima Toshiyasu, Hirasawa Shigeichi
IEICE technical report. Information theory 94 ( 171 ) 49 - 54 1994.07
View Summary
The technique which derives a general rule to explain given examples is called inductive learning. In this paper, we discuss the decision tree as a method for expressing the knowledge obtained by learning, though there are various other forms such as formal languages, predicate logic expressions, and PROLOG programs. ID3 was proposed by J. R. Quinlan as a method for constructing decision trees. However, the correlation of two or more attributes cannot be taken into account, because only one attribute is considered at each step of the tree generation process. As a result, it is not guaranteed that the average number of questions using the generated rule is minimized. We propose a new algorithm which makes it possible to consider the relationship between two or more attributes at each step of the tree generation process, and show that a more efficient decision tree can be constructed.
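The single-attribute selection step that the paper criticizes can be written down directly. The toy data and attribute names below are invented for illustration; the sketch computes ID3's information gain for each attribute.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(labels).values())

def information_gain(rows, attr):
    """rows: list of (dict of attribute values, class label) pairs."""
    labels = [y for _, y in rows]
    gain = entropy(labels)
    for v in {x[attr] for x, _ in rows}:
        subset = [y for x, y in rows if x[attr] == v]
        gain -= len(subset) / len(rows) * entropy(subset)
    return gain

# Invented toy data: "windy" determines the label, "outlook" does not.
rows = [({"outlook": "sunny", "windy": True}, "no"),
        ({"outlook": "sunny", "windy": False}, "yes"),
        ({"outlook": "rainy", "windy": True}, "no"),
        ({"outlook": "rainy", "windy": False}, "yes")]

# ID3 splits on the attribute with the largest gain:
print(information_gain(rows, "windy") > information_gain(rows, "outlook"))  # True
```

Because the gain is evaluated one attribute at a time, a pair of attributes that is only jointly informative would score zero individually, which is exactly the weakness the paper's algorithm addresses.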
On the Complexity of Hypothesis Space and the Sample Complexity for Machine Learning
Nakazawa Makoto, Matsushima Toshiyasu, Hirasawa Shigeichi
IEICE technical report. Information theory 94 ( 171 ) 37 - 42 1994.07
View Summary
The problem of learning a concept from examples in the model introduced by Valiant is discussed. According to traditional ways of thinking, it is assumed that learnability is independent of the occurrence probability of instances. By utilizing this probability, we propose a metric as a new measure of the complexity of the hypothesis space. The metric measures the hardness of discrimination between hypotheses. Furthermore, we obtain the average metric dependent on prior information; this is a measure of the complexity of the hypothesis space on average. Similarly, in the worst case, we obtain the minimum metric. We clarify the relationship between these measures and the Vapnik-Chervonenkis (VC) dimension. Finally, we show an upper bound on sample complexity utilizing the metric. These results can be applied in discussing the learnability of classes with an infinite VC dimension.
On Efficient Maximum Likelihood Decoding with GMD decoding
Kobayashi Manabu, Ogino Atsushi, Kohnosu Toshiyuki, Hirasawa Shigeichi
IEICE technical report. Information theory 94 ( 171 ) 61 - 66 1994.07
View Summary
A decoding algorithm which executes maximum likelihood decoding (MLD) by using bounded-distance decoding is known as a soft-decision decoding. A problem, however, is to reduce the complexity of the bounded-distance decoding needed to execute MLD. On the other hand, GMD decoding is well known as an approximation of MLD, and recently GMD decoding in which algebraic decoding is executed only once has been developed. In this paper, we propose an efficient MLD algorithm that uses GMD decoding several times. We also show that the complexity of calculating syndromes can be reduced when algebraic decoding is used the several times required for soft-decision decoding. Moreover, we provide an efficient algorithm to generate the sequences to be decoded to a maximum likelihood codeword more efficiently. Computer simulations show that the decoding complexity can be considerably reduced.
A Note on Update of Uncertain Knowledge
Kawamata Hidenori, Nakazawa Makoto, Matsushima Toshiyasu, Hirasawa Shigeichi
IEICE technical report. Information theory 94 ( 171 ) 55 - 60 1994.07
View Summary
In research on knowledge information processing, knowledge acquisition from experts is an important problem in constructing efficient expert systems. On the other hand, methods for learning rules from examples are also important. Using rules obtained from a finite set of examples, we should represent uncertain knowledge with a range of certainty factors. If we can combine knowledge given by experts with knowledge given by examples, we can establish more reliable knowledge. In this paper, we propose a new method that updates knowledge based on experts by using examples, provided that the additional examples contain observation noise.
INDUCTIVE AND DEDUCTIVE INFERENCE BASED ON INFORMATION THEORY
Hirasawa Shigeichi, Matsushima Toshiyasu
IEICE technical report. Information theory 94 ( 145 ) 73 - 85 1994.07
View Summary
A fundamental and total model for deductive and inductive inference is proposed from the viewpoints of decision theory and information theory. Deductive and inductive inference are regarded as decision problems in the proposed fundamental model. The inference problems of induction and deduction are represented by decision functions and loss functions. The optimal procedures for the inference problems are given by decision theory. The average risks of the optimal procedures are evaluated using information theory.
On an Uncertain Knowledge Representation Using a Certainty Factor and a Confidence Factor and its Reasoning Algorithm
SATO Jun-ichi, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
8 25 - 28 1994.06
An Inference from a Contradictory Knowledge
SAITO Mikiya, MATSUSHIMA Toshiyasu, HIRASAWA Shigeichi
8 63 - 66 1994.06
Efficient Inductive Learning for PROLOG
Yoshifumi Ukita, Toshiyasu Matsushima, Shigeichi Hirasawa
Annual Conference of the Japanese Society for Artificial Intelligence (8th) 137 - 140 1994.06
On Asymptotic Capabilities of Concatenated Codes and Iterated Codes
NISHIJIMA Toshihisa, HIRASAWA Shigeichi
The Transactions of the Institute of Electronics,Information and Communication Engineers. A 77 ( 5 ) 786 - 793 1994.05
On coded ARQ schemes cumulating the rejected sequences
Niinomi Toshihiro, Matsushima Toshiyasu, Hirasawa Shigeichi
IEICE technical report. Information theory 94 ( 35 ) 37 - 42 1994.05
View Summary
In ARQ schemes using the VA (Viterbi algorithm) and cumulating the rejected sequences to make a more powerful decision at the next repeat, one approach is to eliminate only the non-reliable segments from the maximum likelihood path over the trellis block, chosen by likelihood ratio testing at each pair of unmerged paths. The decoder then selects only the reliable segments from each repeated block. In this case, the procedure of the VA with an erasure option is not a single-stage test like Yamamoto's scheme [1], but a sequential testing problem. In this paper, we discuss this sequential testing, especially using the VA with repeat requests while cumulating the rejected sequences. [1] H. Yamamoto and K. Itoh, "Viterbi decoding algorithm for convolutional codes with repeat request," IEEE Trans. Inf. Theory, vol. IT-26, pp. 540-547 (Sep. 1980).
On Knowledge Representation and Reasoning System with a Range of Certainty Factors
SUZUKI MAKOTO, MATSUSHIMA TOSHIYASU, HIRASAWA SHIGEICHI
IPSJ Journal 35 ( 5 ) 691 - 705 1994.05
View Summary
We discuss a method for representing uncertain knowledge information and a reasoning system that guarantees a given reliability, based on information theory and probability theory. In this paper, we propose a new reasoning system which consists of inductive and deductive inference, using a knowledge representation method with a range of certainty factors. In inductive inference, a new atom is computed from the observed data or facts, together with the upper bound and lower bound between which its probability must fall. In deductive inference, which includes negation, conjunction, disjunction, chain, and modus ponens, a new atom is deduced with a range of certainty factors computed from the results of inductive inference. Properties and behaviors of the system are clarified. It can be concluded that the new reasoning system is effective compared with conventional expert systems.
Exponential Error Bounds for Codes based on Plotkin's Construction
Hirasawa Shigeichi, Kasahara Masao
IEICE technical report. Information theory 93 ( 381 ) 7 - 11 1993.12
View Summary
Plotkin's construction methods are known to be effective for obtaining efficient codes from previously known codes. The present authors have proposed new codes C constructed on the basis of Plotkin's construction, together with a decoding algorithm for them. In this paper, we discuss exponential error bounds for the codes C based on random coding arguments. We show that the codes C are superior to conventional single-stage codes from the viewpoint of error exponents for rates below capacity. Throughout this paper, channels are assumed to be binary-input additive white Gaussian noise channels with signal-to-noise ratio p.
Learning formal languages from Feasible Teachers
Journal of Japan Industrial Management Association 44 ( 3 ) 245 - 245 1993.08
Capabilities on the Probability of Undetected Error for Constructive Iterated Codes
Nishijima Toshihisa, Hirasawa Shigeichi
IEICE technical report. Information theory 93 ( 164 ) 93 - 99 1993.07
View Summary
In practical applications of coding theory, e.g., signature analysis, ARQ systems, etc., analysis of the probability of undetected error for binary linear block codes plays an important part. It is known that the probability of undetected error for binary linear block codes having a symmetric weight distribution converges exponentially to zero with code length n. In this paper, we discuss iterated codes having a symmetric weight distribution used as error-detecting codes. It is shown that the probability of undetected error for iterated codes also converges exponentially to zero with code length n, and that the complexity of the operation to detect an error for iterated codes is lower than that for ordinary codes.
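The quantity analyzed above can be computed by enumeration for a small iterated (product) code. Using single-parity-check component codes is an illustrative choice; the probability of undetected error is the total probability that a nonzero codeword is received when the all-zero word is sent over a BSC:

```python
import itertools

def codewords_product_parity(r, c):
    # Iterated (product) code: every r x c binary array whose rows and
    # columns all have even parity, built from (r, r-1) and (c, c-1)
    # single-parity-check component codes.
    words = []
    for info in itertools.product((0, 1), repeat=(r - 1) * (c - 1)):
        M = [list(info[i * (c - 1):(i + 1) * (c - 1)]) for i in range(r - 1)]
        for row in M:
            row.append(sum(row) % 2)                 # row parity bit
        M.append([sum(col) % 2 for col in zip(*M)])  # column parity row
        words.append(tuple(b for row in M for b in row))
    return words

def p_undetected(C, eps):
    # P_ud(eps) = sum over nonzero codewords of eps^wt * (1-eps)^(n-wt),
    # i.e. the chance a BSC error pattern lands exactly on a codeword.
    n = len(C[0])
    return sum(eps ** sum(w) * (1 - eps) ** (n - sum(w))
               for w in C if any(w))

C = codewords_product_parity(3, 3)   # (9, 4) product code, d = 4
print(len(C), p_undetected(C, 0.01))
```

For eps = 0.01 the dominant term comes from the nine weight-4 codewords, so P_ud is far below the generic bound 2^-(n-k) = 2^-5.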
On block coded ARQ schemes cumulating the rejected information
Niinomi Toshihiro, Matsushima Toshiyasu, Hirasawa Shigeichi
IEICE technical report. Information theory 93 ( 164 ) 101 - 106 1993.07
View Summary
ARQ is an error control scheme in which, if the receiver decides that the received information is not reliable, it can request that the same information be repeated. Some ARQ schemes do not discard the non-reliable information but accumulate it to make a more powerful decision on whether the next received sequence should be decoded or not. This idea can be considered as sequential decision making that accumulates the rejected information. The well-known Wald sequential probability ratio test, which makes the most powerful decision when the sample size is variable, follows the same principle as these kinds of ARQ. In this paper, we evaluate ARQ schemes that accumulate rejected information by a random coding argument like Forney's [7]. We study the error exponent of decoding the second information block under the condition that the first information block is rejected.
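Wald's sequential probability ratio test mentioned above can be sketched for a Bernoulli hypothesis. The parameters and data here are illustrative; this shows the accumulate-until-threshold principle, not the paper's ARQ scheme itself:

```python
import math
import random

def sprt(samples, p0, p1, alpha=0.01, beta=0.01):
    # Wald's SPRT for H0: p = p0 vs H1: p = p1 on Bernoulli samples.
    # The log-likelihood ratio is accumulated until it crosses a
    # threshold, mirroring how the ARQ scheme accumulates evidence
    # from rejected receptions instead of discarding it.
    A = math.log((1 - beta) / alpha)   # accept H1 at or above this
    B = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(samples, 1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= A:
            return "H1", n
        if llr <= B:
            return "H0", n
    return "undecided", len(samples)

random.seed(0)
data = [random.random() < 0.8 for _ in range(200)]  # true p = 0.8
print(sprt(data, p0=0.2, p1=0.8))  # e.g. ("H1", n) after few samples
```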
A Note on Learning of Layered Neural Networks : Behavior of Weight Matrix in Autoassociative Recall
Matsubara Noriaki, Otani Hideyuki, Hirasawa Shigeichi
IEICE technical report. Information theory 93 ( 164 ) 37 - 42 1993.07
View Summary
Autoassociative recall was the first model realized as associative recall in neural networks. Although its capability has been studied, the dynamics of the recall process for reasonable patterns have not been analyzed sufficiently. In this paper we discuss the behavior of the weight matrix in autoassociative recall, and suggest some procedures to avoid the situations, caused by the weight matrix, that make recall impossible.
A note on construction of information networks considering reliability
Shinoaki Michio, Niinomi Toshihiro, Hirasawa Shigeichi
IEICE technical report. Information theory 93 ( 164 ) 25 - 30 1993.07
View Summary
Information networks can be represented by graphs in which the set of vertices corresponds to the nodes of the network and the set of edges corresponds to the links. Based on such a model, the efficient construction of information networks with short transmission delay and high reliability has been studied with a graph-theoretical approach. The problem is to construct a network with high reliability (node connectivity k) and quasi-minimal maximum transmission delay (diameter), when the number of nodes (vertices n) and the number of neighboring nodes (degree k) are given. Previous studies obtained high reliability at the cost of long delay. In this paper, we propose a new method for constructing information networks whose delay is shorter within a limit, and show the effectiveness of the proposed method.
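The delay measure used above, the graph diameter, can be computed by breadth-first search. The circulant graphs below are an illustrative family (not the paper's construction) showing how added links shrink the diameter at the cost of higher degree:

```python
from collections import deque

def diameter(adj):
    # Diameter = maximum over all vertices of the BFS eccentricity.
    best = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

def circulant(n, jumps):
    # Circulant graph C_n(jumps): vertex i is adjacent to i +/- j mod n.
    return {i: sorted({(i + j) % n for j in jumps} |
                      {(i - j) % n for j in jumps})
            for i in range(n)}

ring = circulant(12, [1])        # plain cycle: degree 2, diameter 6
chordal = circulant(12, [1, 3])  # with chords: degree 4, diameter 3
print(diameter(ring), diameter(chordal))  # 6 3
```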
A Note on Model Inference
Mukouchi Takafumi, Nakazawa Makoto, Nakata Norimasa, Hirasawa Shigeichi
IEICE technical report. Information theory 93 ( 164 ) 31 - 36 1993.07
View Summary
Shapiro's model inference system can infer only models that can be represented with a first-order language given in advance. From the viewpoint of concept learning, however, it is not natural to assume a model that includes theoretical terms. Creating or discovering theoretical terms seems to be an essential problem in machine learning, but it is very hard. Although some methods of creating theoretical terms have been proposed, they have no guarantee of identifying a concept in the limit. In this paper, we extend Shapiro's system and show that a system that creates theoretical terms can identify a concept in the limit.
On a Burst Error Correcting Algorithm for Binary Expanded Reed-Solomon Codes
KOHNOSU Toshiyuki, NISHIJIMA Toshihisa, HIRASAWA Shigeichi
The Transactions of the Institute of Electronics, Information and Communication Engineers. A 76 ( 4 ) 656 - 662 1993.04
Inductive Inference for Uncertain Formulas from the Viewpoint of Information Theory
MATSUSHIMA Toshiyasu, INAZUMI Hiroshige, HIRASAWA Shigeichi
IPSJ Journal 33 ( 12 ) 1461 - 1473 1992.12
SUZUKI Joe, OHDAKE Yasutaka, HIRASAWA Shigeichi
Transactions of Information Processing Society of Japan 33 ( 11 ) 1281 - 1289 1992.11
Inductive Inference and Description Length
MATSUSHIMA Toshiyasu (Dept. of Management Information, Yokohama College of Commerce), HIRASAWA Shigeichi (School of Science and Engineering, Waseda University)
7 ( 4 ) 615 - 621 1992.07
Fast Algorithm for Generating Candidate Codewords in Reliability-based Maximum Likelihood Decoding
Hideki Yagi, Toshiyasu Matsushima, Shigeichi Hirasawa
Proceedings of 2005 Hawaii IEICE and SITA Joint Conference on Information Theory (HISC2005) IT2005-13 73 - 78
A study of Reliability Based Hybrid ARQ Scheme with Bitwise Posterior Probability Evaluation from Message Passing Algorithm
Daiki Koizumi, Naoto Kobayashi, Toshiyasu Matsushima, Shigeichi Hirasawa
Proceedings of 2005 Hawaii IEICE and SITA Joint Conference on Information Theory (HISC2005) IT2005-24 135 - 140
A Note on the Construction of Nonlinear Unequal Orthogonal Arrays from Error-Correcting Codes
Tomohiko Saito, Toshiyasu Matsushima, Shigeichi Hirasawa
Proceedings of 2005 Hawaii IEICE and SITA Joint Conference on Information Theory (HISC2005) IT2005-15 85 - 90
Information theoretic consideration of multi-layered inventory process,
M. Goto, T. Masui, S. Hirasawa, N. Tawara
Proc. of the 18th Int. Conf. on Prod. Res., Italy
Exponential Error Bounds and Decoding Complexity for Block Concatenated Codes with Tail Biting Trellis Inner Codes
Shigeichi Hirasawa, Masao Kasahara
to appear in The Journal of Discrete Mathematical Sciences and Cryptography.
Calculation Method of Low Weight Distribution for Parallel Concatenated Block Codes
M. Kobayashi, T. Matsushima, S. Hirasawa
Proceedings of the International Symposium on Information Theory and its Applications, vol. 1, 829 - 834
Basic Research on Digital Libraries
University of California, USA
2008 Masayuki Goto
View Summary
Many information clustering and classification methods have been studied using mathematical models of information retrieval systems. This research aimed at constructing optimal algorithms, mainly for document clustering and classification problems. In the student questionnaire analysis begun several years ago, we proposed a method based on the Probabilistic Latent Semantic Indexing (PLSI) model for grouping students into classes, and applied it to real problems. The proposed method outperforms the conventional vector space model and the LSI model, and has yielded effective analysis results. However, although relatively good performance was achieved compared with conventional methods, the number of documents obtained from a student questionnaire equals the number of students, so a clustering and classification method that performs especially well on relatively small document sets of about 30-150 documents is needed. We therefore focused on algorithms based on Bayes decision theory, which can be expected to perform well even with few training documents, and worked on their construction. By compressing the dimension of the term-document matrix based on principal component analysis, we remove the noise caused by the sparseness of the matrix while taking word co-occurrence into account, and proposed a new algorithm using a mixture distribution over the compressed dimensions. It subsumes the conventional uncompressed case and the fixed-dimension case, and includes a mechanism that automatically finds the optimal number of dimensions. Unfortunately, it is difficult to show optimality theoretically in the regime of few training documents, and we have only experimental comparisons.
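The PLSI model discussed in the summary factorizes the term-document distribution as P(d, w) = Σ_z P(z) P(d|z) P(w|z) and is fitted by EM. A minimal pure-Python sketch on a toy term-document matrix follows; the data, number of iterations, and random initialization are illustrative, and this is not the proposed Bayes-decision-theoretic algorithm itself:

```python
import random

def plsi(counts, K, iters=50, seed=0):
    # PLSI via EM. counts[d][w] = frequency of word w in document d.
    rng = random.Random(seed)
    D, W = len(counts), len(counts[0])
    Pz = [1.0 / K] * K
    Pd_z = [[rng.random() for _ in range(D)] for _ in range(K)]
    Pw_z = [[rng.random() for _ in range(W)] for _ in range(K)]
    for z in range(K):  # normalize the random initial distributions
        s = sum(Pd_z[z]); Pd_z[z] = [p / s for p in Pd_z[z]]
        s = sum(Pw_z[z]); Pw_z[z] = [p / s for p in Pw_z[z]]
    for _ in range(iters):
        Nz = [0.0] * K
        Nd = [[0.0] * D for _ in range(K)]
        Nw = [[0.0] * W for _ in range(K)]
        for d in range(D):
            for w in range(W):
                if counts[d][w] == 0:
                    continue
                # E-step: posterior P(z | d, w)
                post = [Pz[z] * Pd_z[z][d] * Pw_z[z][w] for z in range(K)]
                s = sum(post)
                for z in range(K):
                    r = counts[d][w] * post[z] / s
                    Nz[z] += r; Nd[z][d] += r; Nw[z][w] += r
        # M-step: renormalize the expected counts
        total = sum(Nz)
        for z in range(K):
            Pz[z] = Nz[z] / total
            Pd_z[z] = [c / Nz[z] for c in Nd[z]]
            Pw_z[z] = [c / Nz[z] for c in Nw[z]]
    return Pz, Pd_z, Pw_z

# Two obvious topics: docs 0-1 use words 0-1, docs 2-3 use words 2-3.
counts = [[4, 3, 0, 0], [3, 4, 0, 0], [0, 0, 4, 3], [0, 0, 3, 4]]
Pz, Pd_z, Pw_z = plsi(counts, K=2)
assign = [max(range(2), key=lambda z: Pz[z] * Pd_z[z][d]) for d in range(4)]
print(assign)  # e.g. [0, 0, 1, 1] (cluster labels may swap)
```

Clustering assigns each document to the latent class with the largest joint weight, which is how the questionnaire responses would be grouped.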
2005
View Summary
The research had the following goals; for each we describe the results obtained and the remaining issues. (1) Evaluation of the latent semantic indexing (LSI) model. The LSI model compresses the dimension of the space spanned by document vectors by an algebraic method (singular value decomposition). Applying it to the benchmark document collection provided by the Information Processing Society of Japan, we found that its classification performance is better than the (uncompressed) vector space model, but the improvement is small and inferior to the PLSI model described next. The benchmark, however, is a small collection of about 5,000 documents; application to large-scale retrieval systems remains future work. (2) Extension of the probabilistic latent semantic indexing (PLSI) model. We proposed and evaluated classification and clustering algorithms based on the PLSI model, which replaces algebraic dimension reduction with a probabilistic method. Applied to the benchmark of (1), performance degrades as the document collection grows, but we showed good performance on small collections. The proposed algorithm is Bayes-optimal under certain conditions; reconstructing its behavior on the basis of Bayes decision theory and determining the optimal number of hidden states are future work. (3) Knowledge discovery from questionnaires mixing multiple-choice and free-text questions. We applied the good performance of the PLSI model confirmed in (2) to the analysis of course-improvement questionnaires collected as real data, for the compulsory second-year undergraduate course "Computer Engineering." We presented a course model and clarified the causal relation between, as input, the students' explicit and implicit characteristics obtained from the questionnaire and, as output, their grades and satisfaction (also extracted from the questionnaire). Unfortunately it is impossible to derive course-management measures such as class grouping from the pre-course questionnaire alone, but we showed that knowledge useful for course management can be obtained, such as how to run lectures and students' views of midterm tests and exercises. We also used the approach to analyze the questionnaire on MNC's web course registration conducted last spring, extracting student requests. (4) Questionnaire survey and analysis of students in Taiwan. The questionnaire analysis results from our university were presented at an international seminar held in Taiwan in spring 2006 and in an intensive course at Tamkang University. Taking this opportunity, we decided to collect questionnaires from Taiwanese students this year as well. Japan and Taiwan have jointly started research on a Japanese-Chinese cross-language retrieval system and will work on questionnaire analysis together.
2004
View Summary
We studied information retrieval, classification, and clustering on document databases, in particular evaluating and applying systems using the PLSI (Probabilistic Latent Semantic Indexing) model, a probabilistic model. (1) "Document classification and clustering using the PLSI model" performs well on small databases [1]. (2) "An information retrieval system using the PLSI model" performs about as well as one using the vector space model, while allowing reduced computation [5]. (3) Applying important-sentence extraction, text summarization, and PLSI-based document clustering to "analysis of student course-improvement questionnaires," combined with statistical processing, yielded effective indicators explaining student satisfaction and final grades [4]. We also examined models other than PLSI (mainly the vector space model) and (4) improved the performance of the retrieval, classification, and clustering methods.
2004
View Summary
We studied the coding techniques treated in information theory and coding theory, focusing on their foundations: error-correcting codes and data compression. Results: (1) "Block concatenated codes with tail-biting trellis codes" have a larger error exponent than conventional block concatenated codes at the same decoder complexity [2]. (2) For "maximum likelihood decoding using permuted generator matrices," we gave an algorithm that reduces the computation required to generate candidate codewords [1][3][14]. (3) We clarified a construction of "low-density parity-check codes for channels with memory" [4] and presented "maximum likelihood decoding on erasure channels" [15]. (4) For "coding of source sequences output word by word," we evaluated performance with respect to word segmentation (corresponding to Japanese morphological analysis) when the output is viewed symbol by symbol [19]; this item takes the coded-informatics viewpoint of this research. Assuming scenarios where these techniques are used in a networked society such as the Internet, (5) we evaluated an "electronic conference system over the Internet" for small seminars connecting remote sites (net seminars). We held a "Japan-Taiwan international discussion on environmental protection" connecting Japan and Taiwan [7], and clarified the conditions on Internet line quality (delay and packet loss rate) under which a seminar is feasible [5][12].
Knowledge Discovery and Construction of Research Support Systems in the Media Network Society
2002
View Summary
In response to the rapid development of the media network society, this work treats the fundamentals and applications needed to build systems that take over part of advanced, intelligent human information processing. The project consists of two parts. (1) Knowledge acquisition: acquiring hidden knowledge from large databases, document archives, and information scattered over networks, in order to support human intellectual activity. We had previously studied algorithms and solutions whose performance is theoretically guaranteed on the basis of mathematical models. However, the advance of the networked society makes it easy to accumulate large amounts of digitized information, and in an environment where scattered, inexpensive computing power is available a different approach is needed. Unfortunately, theoretically oriented research such as machine learning is limited in the amount of data it can handle and is hard to apply to many real problems. We therefore considered knowledge acquisition from large data sets by applying data mining, text mining, and information retrieval techniques, centered on heuristic algorithms. The results were published as the papers below. (2) Construction of an Internet-based research support environment. We built a research support environment combining a search engine based on text mining with electronic-conference software. The former realizes a private paper retrieval system; the latter makes it possible to hold discussions and seminars with geographically distant collaborators while retrieving and sharing papers and materials. We evaluated the conditions under which a seminar is feasible given the poor line quality of the Internet and reported the results. We add that these results were put to very good use during the sabbatical research period.
2000
View Summary
For each of this year's goals we obtained the following results. 1. Data mining: We proposed and evaluated new algorithms for a clustering technique using genetic algorithms and the k-means method, and for inductive inference that considers relations among attributes; both were reported at the spring and autumn meetings of the Japan Industrial Management Association. We also studied association-rule extraction for multi-valued attributes in large databases, extending methods that previously handled only binary attributes; the results were presented at the Japanese Society for Artificial Intelligence. 2. Disk allocation for distributed Cartesian product files: We studied efficient allocation methods applying coding theory, and showed that using Reed-Solomon codes is elastic (as the number of files grows, the number of disks can be reduced greatly if a slight increase in average access time is tolerated); this was reported at an international conference. 3. Information retrieval systems: No progress was made on applying coding techniques to retrieval systems. As a closely related effort we took up text mining, studying efficient clustering of newspaper articles and efficient detection of compound words in Japanese keyword search. 4. Others: We took up themes proper to coding theory: a method that performs maximum likelihood decoding with bounded computation using the A* algorithm, and a new construction of BCH convolutional codes, both reported to IEICE; the former was written up and submitted to the IEICE transactions.
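The k-means step of the clustering technique mentioned in item 1 can be sketched as follows. This is plain k-means on illustrative 2-D data, without the genetic-algorithm component the report combines it with:

```python
import random

def kmeans(points, k, iters=20, seed=1):
    # Lloyd's k-means: alternate nearest-center assignment and
    # centroid recomputation on 2-D points.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2 +
                                  (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        centers = [(sum(x for x, _ in cl) / len(cl),
                    sum(y for _, y in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Two well-separated blobs of three points each.
pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centers, clusters = kmeans(pts, 2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```

A genetic algorithm would wrap this inner loop, evolving initial center placements to escape poor local optima.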
1998
View Summary
This research considered coding techniques, namely (1) source coding for data compression and (2) channel coding for error correction, both at a fundamental level and in application to information systems. (1) Source coding: (a) Fundamentals: we considered coding that suppresses error propagation for Markov sources, and an extension of partially decodable ZL codes. (b) Applications: statistical model estimation can be treated like source-model estimation in universal source coding. We considered a method of estimating the amount of parameter change when a model whose parameters vary over time is input, and a method of prediction for tree-structured model families using an approximate Bayes coding algorithm. (2) Channel coding: (a) Fundamentals: for binary BCH codes we showed a decoding method that exceeds the BCH bound with a slight increase in computation, and applied it to approximate maximum likelihood decoding. We also considered reducing the decoding error probability of conventional ARQ schemes with convolutional codes by list decoding. (b) For compression of inverted files in information retrieval, we showed and evaluated a two-stage coding method that makes decoding and retrieval easy. We also showed a scheme that improves the performance of built-in test circuits for LSIs by parallelized BCH coding.
1997
View Summary
This research considered coding techniques from the standpoints of coding theory and information theory, and pursued their applications. The techniques treated are, broadly, source coding for data compression and channel coding for error correction. For channel coding, we give conditions under which the true distance of concatenated codes increases, together with the corresponding decoding method. We also show a decoding method that goes beyond the minimum distance of BCH codes and apply it to soft-decision decoding. For unequal error protection codes we show a soft-decision decoding method and clarify how it reduces the error probability. For source coding, we improve Tunstall codes and FG codes, give methods that allow decoding from mid-sequence with little loss of compression efficiency, clarify algorithms for lossy compression, and evaluate them on real data. As an application of source coding, we use it in a learning algorithm for tree-structured data to reduce computation. Published results: "On the minimum distance of binary concatenated codes," IEICE Trans., E80-A, pp. 922-923, May 1997; "An improvement of soft-decision maximum likelihood decoding algorithm using hard-decision bounded distance decoding," IEEE Trans. IT-43, pp. 1314-1319, July 1997; "On the computational complexity of decoding beyond the BCH bound using the Berlekamp-Massey algorithm" (in Japanese), IEICE Trans., J80-A, pp. 1554-1558, Sept. 1997; "On the minimum distance of concatenated codes and decoding method up to the true minimum distance," IEICE Trans., E80-A, pp. 2111-2116, Nov. 1997.
1996
View Summary
This research concerns fundamental and applied studies of coding. As fundamental research, we proposed and evaluated algorithms that perform maximum likelihood (ML) or approximate ML decoding. ML decoding minimizes the decoding error probability but is computationally heavy; we evaluated how much the error probability changes when decoding is executed approximately. We also clarified coding conditions that improve the lower bound on the minimum distance of concatenated codes, together with the corresponding decoding method; by an extension of GMD decoding we showed that errors can be corrected up to the number determined by the actual minimum distance. In addition, we proposed and evaluated a decision-feedback (ARQ) scheme that makes effective use of rejected received sequences to improve reliability, and fixed-delay decoding of convolutional codes using Viterbi decoding; in each case we derived error exponents and confirmed the behavior by simulation. As applied research, we proposed and analyzed parallelized encoder and decoder structures for cyclic codes aimed at high speed. Using this construction, we considered application to built-in testing of LSIs in fault-tolerant computing and showed that the circuit size can be reduced. We also proposed and evaluated a source-coding scheme that decodes a received sequence from mid-stream, and an improved algorithm for building the list of representatives in lossy compression. Applying source-coding algorithms and their properties to inductive inference in knowledge information processing, we used decision trees as the knowledge representation, applied the approach to estimating a Markov model and selecting among models, and confirmed its effectiveness.
1995
View Summary
This research concerns coding techniques common to information theory and coding theory, and is divided into fundamental and applied research. The former concerns proposing and evaluating new error-correction algorithms; the latter is mainly the application of error-detecting codes to fault-tolerant systems. (1) Fundamental research: We treat coding algorithms that perform error correction. First, when burst errors occur, we showed that by using properties of linear cyclic codes and superposing the received sequence with shifted versions of itself, errors cancel each other and strong capability can be drawn out with a relatively simple algorithm. We also showed that in concatenated codes with even minimum distance, decoding that previously stopped at error detection can be improved by using the decoding results of the inner codes. Further, we showed that for linear block codes, ML decoding can be executed efficiently using a trellis structure. We also considered source coding: by tracing the encoding algorithm of Ziv-Lempel codes backwards, we clarified the parameter estimation it performs, and we showed that applying lossless compression algorithms to lossy compression can raise the compression ratio further. (2) Applied research: In built-in self-test (BIST) of LSIs, errors are detected by a test circuit embedded in the chip, using a data compression technique called a signature. We showed that testing can be sped up by giving an encoder structure for parallelizable error-detecting codes. As applications of coding techniques to knowledge information processing, we showed decision and evaluation methods effective for increasing reliability by adding new observation data to uncertain knowledge, and efficient methods for computing the posterior probabilities used in pattern recognition.
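Signature analysis as used in BIST compresses a long test-response stream into a short remainder modulo a feedback polynomial, so a faulty response is detected by a mismatched signature. A minimal sketch follows; CRC-CCITT is an illustrative polynomial choice, not necessarily the one used in this study:

```python
def signature(bits, poly=0x11021):
    # Compress a bit stream into the remainder of the stream, viewed as
    # a GF(2) polynomial, modulo the feedback polynomial 'poly'.
    # 0x11021 = x^16 + x^12 + x^5 + 1 (CRC-CCITT), illustrative only.
    deg = poly.bit_length() - 1
    reg = 0
    for b in bits:
        reg = (reg << 1) | b
        if reg >> deg:        # degree reached: subtract (XOR) the poly
            reg ^= poly
    return reg

good = [1, 0, 1, 1, 0, 0, 1, 0] * 8   # fault-free test response
bad = good.copy()
bad[5] ^= 1                            # inject a single-bit fault
print(signature(good) != signature(bad))  # True: the fault is detected
```

Any single-bit fault is always detected, since x^j mod poly is nonzero for a polynomial with a constant term; the undetected-error analysis above concerns multi-bit patterns that alias to the same signature.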