Announcement

Speech Recognition and The Hidden Markov Model with Scilab: 5-6 February 2015

Date: 07 January 2015

Reported by: WebMaster

Category: Announcements



Speech Recognition and The Hidden Markov Model with Scilab


Date: 5-6 February 2015

Venue: Trity Technologies Training Center, Selangor


For more information on the course fee, registration form and all other details, kindly provide your contact details and email tina@tritytech.com.


“Performing speech recognition with the hidden Markov model and open-source software – Scilab. This makes the research more meaningful and practical!”


Course Synopsis

Speech recognition is the process by which a computer or machine identifies spoken words. A speech recognition system, in general, comprises speech segmentation, feature extraction, and feature matching against a trained library of stored features. Speech segmentation can be accomplished simply by cutting the signal at points where the power of the sampled signal drops to (near) zero. Feature extraction may be done in a variety of ways, depending on the features one chooses to extract; typically these are coefficients that collectively represent the short-time spectrum of the speech signal, such as the mel-frequency cepstral coefficients (MFCCs) or the linear prediction cepstral coefficients (LPCCs). Feature matching is traditionally implemented via dynamic time warping, which provides a means for the temporal alignment of two speech signals that may vary in timing or speed. Modern speech recognition systems are, however, based on hidden Markov models (HMMs), developed by Leonard E. Baum and his coworkers in the late 1960s. The HMM is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. As speech signals can be treated as short-time stationary processes, modeling them with HMMs is feasible and offers great advantages over the earlier template-matching approach.
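As a small taste of the first stage described above, the following Scilab sketch flags speech frames by thresholding short-time frame energy. It is an illustrative sketch, not part of the course material; the toy sine-burst signal, the 8 kHz sampling rate, the 10 ms frame length and the threshold are all assumptions made for the example.

function E = frame_energy(x, flen)
    // Split x into non-overlapping frames of length flen and
    // return the energy of each frame.
    nf = floor(length(x) / flen);
    E = zeros(1, nf);
    for k = 1:nf
        seg = x((k-1)*flen + 1 : k*flen);
        E(k) = sum(seg .^ 2);
    end
endfunction

// Toy signal: silence - tone burst - silence (stands in for one spoken word)
fs   = 8000;                      // assumed sampling rate, Hz
t    = 0:1/fs:0.2;
word = sin(2*%pi*440*t);
x    = [zeros(1, 800), word, zeros(1, 800)];

flen   = 80;                      // 10 ms frames at 8 kHz
E      = frame_energy(x, flen);
speech = E > 0.01 * max(E);       // frames whose power is essentially non-zero
disp(find(speech));               // indices of the frames flagged as speech

Real recordings contain background noise, so in practice the threshold would be set relative to an estimated noise floor rather than to exactly zero power.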


This course is conducted in a workshop-like manner, with a balanced mix of theory and hands-on coding and simulation in Scilab. Extensive exercises are provided throughout the course to cover every angle of algorithm design and implementation using Scilab.


Course Objectives

This two-day course provides a practical introduction to speech recognition and the hidden Markov model. It includes a series of hands-on exercises aimed at helping participants translate the theoretical models into practical applications.


Who Must Attend

Scientists, mathematicians, engineers and programmers at all levels who work with or need to learn about speech recognition and/or the hidden Markov model. No background experience in either of these topics is required. The detailed course material and many source code listings will be invaluable for both learning and reference.


Prerequisites

 A basic knowledge of probability theory, signal processing and Scilab programming is necessary.


What you will learn

Basic theoretical concepts and principles of speech recognition and the hidden Markov model, together with Scilab implementations of the related algorithms.


The course begins with an overview of the speech recognition problem, and a review of some common speech analysis models. The dynamic time warping algorithm and the hidden Markov model are then introduced in turn, with the basic principles behind these methods discussed both through theory and practice. Programming examples are provided at the end of each section to help reconcile theory with actual application.
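To illustrate the kind of programming example involved, here is a minimal Scilab sketch of a dynamic time warping distance between two one-dimensional feature sequences. It is not taken from the course notes; the sequences and the squared-difference local cost are assumptions made for the illustration.

function d = dtw_distance(a, b)
    // Cumulative cost matrix with an extra first row/column of %inf
    n = length(a); m = length(b);
    D = %inf * ones(n + 1, m + 1);
    D(1, 1) = 0;
    for i = 1:n
        for j = 1:m
            cost = (a(i) - b(j))^2;
            D(i+1, j+1) = cost + min([D(i, j+1), D(i+1, j), D(i, j)]);
        end
    end
    d = D(n+1, m+1);
endfunction

// Two versions of the "same" contour, spoken at different speeds
a = [1 2 3 4 3 2 1];
b = [1 1 2 3 4 4 3 2 1];
// The distance is zero here: the warp absorbs the difference in speaking rate,
// which is exactly the temporal alignment property described above.
mprintf("DTW distance = %f\n", dtw_distance(a, b));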


Course Outline


Day 1

•  Introduction

•  The speech signal

•  The signal classification problem

•  Speech analysis models

Filter banks, Critical band scales, Linear prediction, Autoregressive models, Homomorphic systems, Cepstral transformation, Cepstral coefficients (a linear-prediction sketch follows the Day 1 outline)

•  Pattern recognition

Distance and distortion measures, Time alignment, Dynamic time warping, Dynamic programming
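The following Scilab sketch relates to the linear prediction and autoregressive model items above: it estimates the predictor coefficients of a speech frame from its autocorrelation sequence by solving the Yule-Walker equations. It is an illustrative sketch under assumed settings (a synthetic AR(2) frame and model order 2), not material taken from the course.

function a = lpc_coeffs(x, p)
    // Autocorrelation lags 0..p of the frame x
    N = length(x);
    r = zeros(p + 1, 1);
    for k = 0:p
        r(k + 1) = sum(x(1:N-k) .* x(1+k:N));
    end
    // Yule-Walker equations: R * a = [r(1); ...; r(p)], with R Toeplitz
    R = toeplitz(r(1:p));
    a = R \ r(2:p + 1);              // predictor coefficients a(1)..a(p)
endfunction

// Synthetic frame from a known AR(2) process so the estimate can be checked:
// x(n) = 1.3*x(n-1) - 0.6*x(n-2) + e(n)
rand("seed", 0);
N = 2000;
e = rand(1, N, "normal");
x = zeros(1, N);
x(1) = e(1);
x(2) = 1.3*x(1) + e(2);
for n = 3:N
    x(n) = 1.3*x(n-1) - 0.6*x(n-2) + e(n);
end
disp(lpc_coeffs(x, 2));              // should come out close to [1.3; -0.6]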


Day 2

•  Hidden Markov models

States and observations, State transition probabilities and observation probabilities, The three problems

•  The evaluation problem

Forward and backward variables (a forward-algorithm sketch follows this outline)

•  The decoding problem

Viterbi algorithm (a decoding sketch follows this outline)

•  The learning problem

Maximum likelihood estimation, Expectation-maximization algorithm, Discrete observation symbols, Continuous observation densities

•  Implementation issues

Left-right hidden Markov models, The initial estimates
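To give a flavour of the evaluation problem listed above, here is a minimal Scilab sketch of the forward algorithm for a discrete-observation HMM, computing P(O | model). The two-state model, observation matrix and symbol sequence are illustrative assumptions, not numbers from the course.

function p = hmm_forward(A, B, pi0, O)
    N = size(A, 1);                  // number of hidden states
    T = length(O);                   // length of the observation sequence
    alf = zeros(T, N);               // forward variables alpha_t(j)
    alf(1, :) = pi0 .* B(:, O(1))';  // initialisation
    for t = 2:T
        for j = 1:N
            alf(t, j) = (alf(t-1, :) * A(:, j)) * B(j, O(t));   // induction
        end
    end
    p = sum(alf(T, :));              // termination: P(O | model)
endfunction

// Toy two-state model with two observation symbols (illustrative numbers)
A   = [0.7 0.3; 0.4 0.6];            // state transition probabilities
B   = [0.9 0.1; 0.2 0.8];            // observation probabilities B(state, symbol)
pi0 = [0.6 0.4];                     // initial state distribution
O   = [1 1 2 1];                     // observed symbol sequence
mprintf("P(O | model) = %f\n", hmm_forward(A, B, pi0, O));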
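For the decoding problem, the sketch below implements the Viterbi algorithm on the same assumed toy model (re-declared so the block is self-contained), returning the most likely hidden state sequence for the observations.

function q = hmm_viterbi(A, B, pi0, O)
    N = size(A, 1);
    T = length(O);
    delta = zeros(T, N);             // best partial-path score ending in each state
    bp    = zeros(T, N);             // back-pointers
    delta(1, :) = pi0 .* B(:, O(1))';
    for t = 2:T
        for j = 1:N
            [m, k] = max(delta(t-1, :) .* A(:, j)');
            delta(t, j) = m * B(j, O(t));
            bp(t, j)    = k;
        end
    end
    q = zeros(1, T);
    [m, kT] = max(delta(T, :));      // most likely final state
    q(T) = kT;
    for t = T-1:-1:1
        q(t) = bp(t+1, q(t+1));      // trace the back-pointers
    end
endfunction

A   = [0.7 0.3; 0.4 0.6];
B   = [0.9 0.1; 0.2 0.8];
pi0 = [0.6 0.4];
O   = [1 1 2 1];
disp(hmm_viterbi(A, B, pi0, O));     // gives 1 1 2 1 for these numbers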


Please contact us for more information:


Trity Technologies Sdn Bhd 874125-T

26-3 Jalan Puteri 2/4, Bandar Puteri,

Puchong, Selangor, Malaysia

Tel +603-80637737 Fax +603-80637736

Email: tina@tritytech.com


Visit us at www.tritytech.com


Centre for Postgraduate Studies, January 2015