Vimal
02-19-2004, 10:51 AM
Hello,
Referring to training an HMM using Baum-Welch: I tried implementing it.
The forward and backward probabilities both tend to zero after moving
about 10 steps along the chain, i.e. the probabilities keep shrinking
and eventually underflow to zero. I have referred to other papers too,
but they do not mention anything about underflow in the forward and
backward probabilities.
I am confused: is it normal for the forward (and backward) probabilities
to shrink as we move from each 't' to 't+1', and is there any way to
mitigate this?
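To make the behavior concrete: each forward step multiplies by transition and emission probabilities that are less than 1, so the alpha values shrink geometrically and eventually underflow in floating point. The standard mitigation is to rescale alpha at every step and accumulate the log of the scaling factors (Rabiner-style scaling). Below is a minimal sketch with a hypothetical 2-state, 2-symbol HMM (the matrices A, B, pi and the observation sequence are illustrative, not from the original post):

```python
import numpy as np

# Hypothetical 2-state HMM with 2 observation symbols (illustrative values).
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # transition probabilities
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission probabilities
pi = np.array([0.5, 0.5])                # initial state distribution

def forward_naive(obs):
    """Plain forward pass: alpha shrinks geometrically and underflows."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha

def forward_scaled(obs):
    """Forward pass with per-step scaling; returns normalized alpha
    and the log-likelihood accumulated from the scaling factors."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()                      # scaling factor at t = 0
    alpha = alpha / c
    log_lik = np.log(c)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()                  # scaling factor at this step
        alpha = alpha / c                # keep alpha in a safe numeric range
        log_lik += np.log(c)             # log P(obs) = sum of log c_t
    return alpha, log_lik

obs = np.random.default_rng(0).integers(0, 2, size=2000)
print(forward_naive(obs))                # underflows to [0. 0.]
alpha, log_lik = forward_scaled(obs)
print(log_lik)                           # finite log P(obs | model)
```

The same scaling factors c_t can be reused to scale the backward pass, so the gamma and xi quantities in the Baum-Welch re-estimation formulas come out correctly without ever forming the raw (underflowing) products. Working entirely in log space is an alternative fix.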
Best regards,
Vimal