
# Working out probability in a continuous-time Markov chain

I can do questions 1-3 no problem, but I'm lost with part 4. Can anyone here explain the steps I need to go through to get this answer in more detail?

Jun 19, 2015


#1

Are you sure the answer given for part 1 is correct? It looks to me like the 3 on the bottom row of Q should be -3, since each row of a generator matrix must sum to zero.

If you then calculate P(0.1), its first column matches the numbers given in the answer to part 4.

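A sketch of what the P(0.1) calculation mentioned above could look like: for a continuous-time Markov chain with generator Q, the transition matrix at time t is the matrix exponential P(t) = exp(Qt). The actual Q from the question isn't reproduced in this thread, so the generator below is purely illustrative (off-diagonal entries nonnegative, rows summing to zero).

```python
# Illustrative P(t) = exp(Qt) computation for a 3-state CTMC.
# Q here is made up; the Q from the original question is not shown in the thread.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 2.0,  1.0, -3.0]])

# Entry (i, j) of P(0.1) is Prob(in state j at t = 0.1 | started in state i).
P = expm(Q * 0.1)

print(P)
print(P.sum(axis=1))  # each row of a transition matrix sums to 1
```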

Jun 19, 2015
#2

These are the answers provided by my lecturer, but yes, I also think it's a typo and should be -3 rather than 3; I forgot to mention that!

My problem is with the fundamentals: what does the calculation for P(0.1) look like? Once I see it, I can make sense of it (I hope).

Jun 19, 2015
#3

Does this help? [Alan's linked working did not survive the page capture.]

Alan Jun 19, 2015
#4

Yeah it does, thanks Alan. What's still causing some confusion for me is the point of mentioning that the initial distribution for the Markov chain is (1/3, 1/3, 1/3).

Jun 19, 2015
#5

The state of the Markov chain after a time t depends on the initial state. The occupancy of state 0 after time t is the sum, over every state i, of the initial occupancy of state i times the probability of a transition from i into state 0 (the first column of P(t)). Since each initial occupancy here is 1/3, that works out to 1/3 times the sum of the first column of P(t).

Notice that the sum of the terms in the first column of P(t) given in the answer to part 4 is divided by 3 (i.e. multiplied by 1/3).
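The relation described above can be sketched numerically: the distribution at time t is pi(t) = pi(0) P(t), and with a uniform start each term of the first column of P(t) gets weight 1/3. As before, the generator Q below is illustrative, not the one from the original question.

```python
# Sketch of how the initial distribution (1/3, 1/3, 1/3) enters part 4:
# pi(t) = pi(0) @ P(t). Q is an illustrative generator, not the question's.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 2.0,  1.0, -3.0]])

pi0 = np.array([1/3, 1/3, 1/3])   # uniform initial distribution
P = expm(Q * 0.1)
pi_t = pi0 @ P                    # distribution over states at t = 0.1

# With a uniform start, the occupancy of state 0 equals
# (sum of the first column of P(0.1)) / 3:
print(pi_t[0], P[:, 0].sum() / 3)
```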

Jun 19, 2015
#6

I understand it now, thanks a lot Alan =)

Jun 19, 2015