πŸ“š

Β >Β 

πŸ“ˆΒ 

Β >Β 

πŸ“

4.14 Matrices Modeling Contexts

1 min read • June 18, 2024

Jesse

Constructing Models Using Provided Context

A contextual scenario can indicate the rate of transitions between states as percent changes. A matrix can be constructed based on these rates to model how states change over discrete intervals. πŸš€

Example: Markov Chains

One example of this is a Markov chain: a mathematical model of a system that can occupy one of several states, together with the probabilities of transitioning between those states over time. ⛓️

To construct a model of a scenario involving transitions between two states using matrices, we first need to define the states and the rates of transition between them. Let's say we have a scenario involving two states: state A and state B. We will also assume that the scenario is a discrete-time process, meaning that it can only take place at specific time intervals. ↔️

We can represent the rates of transition between the two states as a matrix whose entries give the probability of transitioning from one state to another. This matrix is known as the transition matrix. Using each row for the starting state and each column for the ending state, the transition matrix for our scenario is:

T = [p(A->A) p(A->B); p(B->A) p(B->B)]

Where,

  • p(A->A) is the probability of staying in state A
  • p(A->B) is the probability of transitioning from state A to state B
  • p(B->A) is the probability of transitioning from state B to state A, and
  • p(B->B) is the probability of staying in state B.
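As a quick sketch of building such a matrix in code (the rates 0.7/0.3/0.4/0.6 are made-up numbers for illustration only):

```python
# Hypothetical transition rates for a two-state chain.
# Row = starting state, column = ending state, so each row must sum to 1.
T = [
    [0.7, 0.3],  # from A: p(A->A) = 0.7, p(A->B) = 0.3
    [0.4, 0.6],  # from B: p(B->A) = 0.4, p(B->B) = 0.6
]

# Sanity checks for a valid transition matrix.
for row in T:
    assert all(p >= 0 for p in row)      # probabilities are non-negative
    assert abs(sum(row) - 1.0) < 1e-9    # each row sums to 1
```

Any scenario that gives transition rates as percentages can be packed into a matrix this way, one row per starting state.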

To find the probabilities of moving between states over multiple time intervals, we raise the transition matrix to a power equal to the number of intervals. For example, to find the two-step transition probabilities, we square the transition matrix:

T^2 = [p(A->A)^2 + p(A->B)*p(B->A)   p(A->A)*p(A->B) + p(A->B)*p(B->B); p(B->A)*p(A->A) + p(B->B)*p(B->A)   p(B->A)*p(A->B) + p(B->B)^2]

Each entry of T^2 gives a two-step probability: the entry in row i, column j is the probability of going from state i to state j over two time intervals. Raising T to higher powers gives the transition probabilities after correspondingly more time intervals. 📥
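A small sketch of the two-step computation (the rates 0.7/0.3/0.4/0.6 are made up for illustration), squaring T with plain-Python matrix multiplication:

```python
T = [[0.7, 0.3],   # made-up transition rates: row = starting state
     [0.4, 0.6]]

def matmul(X, Y):
    """Multiply two 2x2 matrices stored as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Two-step transition probabilities: entry [i][j] of T2 is the
# probability of going from state i to state j in two intervals.
T2 = matmul(T, T)
# e.g. T2[0][0] = 0.7*0.7 + 0.3*0.4 = 0.61
```

Note that T2's rows still sum to 1 — a power of a transition matrix is itself a transition matrix.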

It's worth noting that for a Markov chain, every entry of the transition matrix must be non-negative and each row must sum to 1 — starting from any state, the system must end up in some state. 1️⃣

[Image: Markov chain transition diagram]

Source: Towards Data Science

Moving Further with Modeling

Predicting Future States

The product of a matrix that models transitions between states and a corresponding state vector can be used to predict future states. πŸ“‘

A state vector contains the probabilities of the system being in each state at a specific time. Because our transition matrix is row-stochastic (each row sums to 1), it is convenient to write the state vector as a row vector and multiply it on the left of the matrix; the product gives the distribution one time interval later. #️⃣

For example, consider a 2x2 matrix T that models the transitions between states A and B, and a state vector X that holds the probabilities of being in state A and state B at a specific time. To find those probabilities at the next time interval, we compute the product X*T.
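As a sketch (the rates and the starting distribution are made up; the state is written as a row vector multiplied on the left of a row-stochastic T):

```python
T = [[0.7, 0.3],   # made-up transition rates: row = starting state
     [0.4, 0.6]]
X = [0.5, 0.5]     # made-up current distribution: 50% A, 50% B

# One step forward: X_next[j] = sum over i of X[i] * T[i][j]
X_next = [sum(X[i] * T[i][j] for i in range(2)) for j in range(2)]
# X_next is [0.55, 0.45] (up to floating-point rounding)
```

The result is still a probability distribution — its entries sum to 1.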

Predicting Steady States

Another important concept in this scenario is the steady state, which is a distribution between states that does not change from one step to the next. 🎁

This can be found by repeatedly multiplying the state vector by the transition matrix. As we keep multiplying, the resulting state vectors will (for many transition matrices) eventually stop changing from one step to the next. 👌 That limiting state vector is the steady state: it satisfies X = X*T and describes the long-term distribution of the system between states.
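The iteration described above can be sketched like this (rates are made up; for this particular T the distribution settles near 57% A, 43% B regardless of where it starts):

```python
T = [[0.7, 0.3],   # made-up transition rates: row = starting state
     [0.4, 0.6]]
X = [1.0, 0.0]     # start entirely in state A

# Keep advancing the distribution until it stops changing.
for _ in range(1000):
    X_next = [sum(X[i] * T[i][j] for i in range(2)) for j in range(2)]
    if max(abs(a - b) for a, b in zip(X, X_next)) < 1e-12:
        break
    X = X_next
# X is now (approximately) the steady state: multiplying
# by T no longer changes it.
```

Starting from [0.0, 1.0] instead would converge to the same vector, which is what makes it a long-term distribution.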

[Image: convergence of state vectors to a steady state]

Source: Math Stack Exchange

Predicting Past States

Lastly, the product of the inverse of a matrix that models transitions between states and a corresponding state vector can predict past states. πŸ”₯

The inverse of a matrix, when it exists, is the unique matrix that, when multiplied by the original matrix, gives the identity matrix. Multiplying the current state vector by the inverse of the transition matrix undoes one transition step, recovering the state vector at the previous time interval. 🌐

For example, suppose a 2x2 matrix T models the transitions between states A and B, and a state vector X holds the probabilities of being in state A and state B at a specific time. To find those probabilities at the previous time interval, we compute the product X * T^-1.
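A sketch of stepping backwards (same kind of made-up rates; the 2x2 inverse is computed with the standard determinant formula):

```python
T = [[0.7, 0.3],      # made-up transition rates: row = starting state
     [0.4, 0.6]]
X_now = [0.55, 0.45]  # distribution observed at the current step

# Inverse of a 2x2 matrix [[a, b], [c, d]]: (1/det) * [[d, -b], [-c, a]]
(a, b), (c, d) = T
det = a * d - b * c   # must be nonzero for the inverse to exist
T_inv = [[d / det, -b / det],
         [-c / det, a / det]]

# One step backward: X_prev = X_now * T^-1
X_prev = [sum(X_now[i] * T_inv[i][j] for i in range(2)) for j in range(2)]
# X_prev recovers the earlier distribution, here [0.5, 0.5]
```

Note that T⁻¹ is generally not itself a transition matrix (it can have negative entries), so this only makes sense for undoing steps, not for running the chain forward.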
