Abstract
A fundamental task in the field of pattern recognition is finding a suitable representation for a feature. In this paper, we present a new visual speech feature representation approach that combines the Hypercolumn Model (HCM) with a Hidden Markov Model (HMM) to form a complete lip-reading system. In this system, HCM extracts visual speech features from the input image, and the extracted features are modeled by Gaussian distributions through an HMM. The proposed lip-reading system works under varying lip positions and sizes. All images were captured in a natural environment without special lighting or lip markers. Experimental results compare favourably with those of two reported systems, SOM-based and DCT-based; HCM provides better performance than both.
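The recognition stage the abstract describes, per-frame visual features scored against Gaussian-emission word HMMs, can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the state counts, parameter values, and the plain forward algorithm are assumptions, and the HCM feature extractor is stubbed out as a given `(T, D)` feature sequence.

```python
import numpy as np

def gaussian_logpdf(x, mean, var):
    """Log density of a diagonal-covariance Gaussian at feature vector x."""
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def logsumexp(v):
    """Numerically stable log(sum(exp(v))) for a 1-D array."""
    m = np.max(v)
    return m + np.log(np.sum(np.exp(v - m)))

def forward_loglik(obs, log_pi, log_A, means, variances):
    """Log-likelihood of a feature sequence under a Gaussian-emission HMM,
    computed with the forward algorithm in log space.

    obs:       (T, D) per-frame features (e.g. HCM outputs)
    log_pi:    (N,)   log initial state probabilities
    log_A:     (N, N) log transitions, log_A[i, j] = log P(state j | state i)
    means:     (N, D) per-state emission means
    variances: (N, D) per-state diagonal emission variances
    """
    T, N = len(obs), len(log_pi)
    # Per-frame, per-state emission log-likelihoods.
    log_b = np.array([[gaussian_logpdf(obs[t], means[j], variances[j])
                       for j in range(N)] for t in range(T)])
    alpha = log_pi + log_b[0]                # initialisation
    for t in range(1, T):                    # recursion over frames
        alpha = log_b[t] + np.array(
            [logsumexp(alpha + log_A[:, j]) for j in range(N)])
    return logsumexp(alpha)                  # termination

def recognise(obs, word_models):
    """Pick the word whose HMM gives the sequence the highest likelihood."""
    return max(word_models,
               key=lambda w: forward_loglik(obs, *word_models[w]))
```

In use, one HMM is trained per vocabulary word and `recognise` simply compares forward log-likelihoods across the word models; any full system would also need the HCM feature extraction and Baum-Welch training, which are omitted here.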