MODEL:
! Markov chain model;
SETS:
 ! There are four states in our model and over time
   the model will arrive at a steady state
   equilibrium.
   SPROB( J) = steady state probability;
   STATE/ A B C D/: SPROB;

 ! For each state, there's a probability of moving
   to each other state.
   TPROB( I, J) = transition probability;
   SXS( STATE, STATE): TPROB;
ENDSETS

DATA:
 ! The transition probabilities. These are the
   probabilities of moving from one state to the
   next in each time period. Our model has four
   states, and in each time period there's a
   probability of moving to each of the four
   states. The sum of probabilities across each of
   the rows is 1, since the system either moves to
   a new state or remains in the current one.;
   TPROB = .75 .1  .05 .1
           .4  .2  .1  .3
           .1  .2  .4  .3
           .2  .2  .3  .3;
ENDDATA

! The model;
 ! Steady state equations;
 ! Only need N equations, so drop the last;
  @FOR( STATE( J)| J #LT# @SIZE( STATE):
   SPROB( J) = @SUM( SXS( I, J): SPROB( I) * TPROB( I, J))
  );

 ! The steady state probabilities must sum to 1;
  @SUM( STATE: SPROB) = 1;

 ! Check the input data and warn the user if the sum
   of probabilities in a row does not equal 1.;
  @FOR( STATE( I):
   @WARN( 'Probabilities in a row must sum to 1.',
    @ABS( 1 - @SUM( SXS( I, K): TPROB( I, K))) #GT# .000001);
  );

END
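A short note on the algebra the constraints above encode (explanatory only, not part of the model file): writing the transition matrix TPROB as P and the steady-state attribute SPROB as the row vector pi, the model solves the standard steady-state system of a discrete-time Markov chain,

\[
\pi_j \;=\; \sum_{i=1}^{N} \pi_i \, P_{ij} \quad (j = 1,\dots,N-1), \qquad \sum_{j=1}^{N} \pi_j \;=\; 1 .
\]

Because every row of P sums to 1, the N balance equations are linearly dependent, so one of them can be dropped and replaced by the normalization constraint. That is exactly what the filter J #LT# @SIZE( STATE) in the @FOR loop does: it generates only the first N - 1 balance equations, and the @SUM constraint supplies the remaining condition.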