
Applied Stochastic Processes | Final Exam Summary

Published: 2025-02-07

1 Markov chain computation problems

Question type:

  • Invariant distribution: solve \(\pi P=\pi\) for π.
  • Find \(P(X_1=3,X_2=4,X_3=2)\):
    • The expression \(=\sum_{i\in S}P(X_0=i,X_1=3,X_2=4,X_3=2) = \sum_{i\in S}\mu(i)p_{i3}\cdot(p_{34}p_{42})\), where μ is the initial distribution.
  • ① Find \(\lim_{n\rightarrow \infty} P[X_n=i],~i\in\{1,2,3,4\}\). ② Find \(\lim_{n\rightarrow \infty}\frac1n \sum_{j=0}^{n-1}f(X_j)\).
    • First find the invariant distribution from \(\pi P=\pi\).
    • It is then easy to verify that X is irreducible and aperiodic (e.g. \(p^{(1)}_{11}>0,~p^{(2)}_{11}>0\)).
    • Therefore ① the limiting distribution equals the invariant distribution; ② the expression \(=\sum_{i\in S} f(i) \cdot \lim_{n\rightarrow \infty}\frac1n \sum_{j=0}^{n-1}I\{X_j=i\}=\sum_{i\in S} f(i)\pi_i\) (ergodic theorem).
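The pipeline above (solve \(\pi P=\pi\), then average f against π) can be sketched numerically. The 4-state matrix P and the function f below are made-up examples, not the course problem:

```python
# Sketch: find the invariant distribution pi (pi P = pi) by power iteration,
# then form the ergodic average sum_i f(i) * pi_i.
# P and f are arbitrary example data.

P = [[0.1, 0.4, 0.3, 0.2],
     [0.3, 0.2, 0.3, 0.2],
     [0.2, 0.3, 0.1, 0.4],
     [0.4, 0.1, 0.3, 0.2]]

def step(pi, P):
    """One application of the map pi -> pi P (row vector times matrix)."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

pi = [0.25] * 4                       # any starting distribution works here
for _ in range(200):                  # power iteration converges geometrically
    pi = step(pi, P)

f = [1.0, 2.0, 3.0, 4.0]              # example f(i)
long_run_average = sum(fi * p for fi, p in zip(f, pi))
```

Since this P has all entries positive, the chain is irreducible and aperiodic, so the iteration converges regardless of the starting distribution.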

2 Recurrent Markov chains

Some auxiliary definitions:

  • First passage time: the first time the chain reaches j starting from i, defined as \(T_{ij}=\min\{n\ge 1: X_n=j\,|\,X_0=i\}\); if j is never reached, \(T_{ij}=\infty\).
  • First passage probability: the probability of reaching j from i for the first time in exactly n steps, defined as \(f_{ij}^{(n)}=P(T_{ij}=n|X_0=i)\) \(=P(X_n=j,~X_k\neq j,~1\le k\le n-1~|~X_0=i)\)
  • Probability of ever reaching j: the probability of reaching j from i in finitely many steps, defined as \(f_{ij}=\sum_{n=1}^\infty f_{ij}^{(n)}\)
  • Mean recurrence time: the expected time to return from i to i, \(\mu_i=ET_{ii}=\sum_{n=1}^\infty nf_{ii}^{(n)}\)

Recurrence:

  • Recurrent: starting from state i, the probability of eventually returning to i is 1, but the expected return time \(E(T_{ii})\) may be \(+\infty\) (recurrence covers both the null and the positive case).
  • Positive recurrent: starting from state i, the probability of eventually returning to i is 1, and the expected return time \(E(T_{ii})\) is finite.
  • Criteria for recurrence / positive recurrence:
    • Recurrent state:
      • \(f_{ii}=\sum_{n=1}^\infty f_{ii}^{(n)}=1\), i.e. the chain always returns to state i.
      • \(G_{ii} = \sum_{n=0}^\infty p_{ii}^{(n)}=1/(1-f_{ii}) = \infty\), i.e. the series diverges.
    • Positive recurrent state:
      • \(\nu_i=1/\mu_i=1/ET_{ii}>0\), i.e. \(ET_{ii}<\infty\): the expected return time is finite.
      • i is recurrent and aperiodic, and \(\lim_{n\rightarrow\infty} p_{ii}^{(n)} = 1/\mu_i = 1/ET_{ii} > 0\), i.e. the expected return time is finite.
      • \(\lim_{n\rightarrow\infty} p_{ii}^{(n)} = 1/ET_{ii}\): in the long run, the frequency with which state i is visited is inversely proportional to its expected return time.
    • Null recurrent state:
      • \(\nu_i=1/\mu_i=1/ET_{ii}=0\), i.e. \(ET_{ii}=\infty\): the expected return time is infinite.
      • i is recurrent and aperiodic, and \(\lim_{n\rightarrow\infty} p_{ii}^{(n)}=0\), i.e. the expected return time is infinite.

Question type:

  • Chain 0 → 1 → … → n → 0: use \(f_{00}=\sum f_{00}^{(n)} = 1\), write out the trajectories that eventually return to 0; the chain is obviously irreducible.
  • The simple random walk is recurrent [Memorize].
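For the symmetric walk on ℤ the return probabilities are \(p_{00}^{(2n)} = \binom{2n}{n}/4^n\), so a quick numeric sketch can show the partial sums of \(G_{00}=\sum_n p_{00}^{(n)}\) growing without bound (≈ \(2\sqrt{N/\pi}\)), consistent with recurrence. The cutoffs 100 and 10000 below are arbitrary:

```python
def partial_sum_G00(N):
    """Partial sum sum_{n=0}^{N} p_00^(2n) for the symmetric walk on Z,
    where p_00^(2n) = C(2n, n) / 4^n.  Uses the exact ratio
    p_00^(2n+2) = p_00^(2n) * (2n+1) / (2n+2) to avoid huge numbers."""
    p, total = 1.0, 1.0            # p_00^(0) = 1
    for n in range(N):
        p *= (2 * n + 1) / (2 * n + 2)
        total += p
    return total

# The partial sums keep growing: the series diverges, so state 0 is recurrent.
s_small = partial_sum_G00(100)
s_large = partial_sum_G00(10000)
```

Doubling the horizon by a factor 100 roughly multiplies the partial sum by 10, matching the \(\sqrt N\) growth of a divergent \(\sum 1/\sqrt{\pi n}\) tail.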

3 Continuous-time Markov chains

Necessary definitions:

  • Q matrix:
    • \(q_i = -q_{ii}\) is the rate at which the chain leaves state i. \(q_{ii}\le 0\): if \(X_t = i\), then over \([t, t+\Delta t]\) probability mass can only flow out of state i, not into it, so the diagonal entries of the Q matrix are nonpositive.
    • Each row of Q sums to 0; each row of the discrete-time transition matrix P sums to 1.
    • The holding time \(\tau=\inf\{t: X_t\neq X_0\}\) is exponential: \(P[\tau > t \mid X_0=i] = e^{-q_it}\); its expected length is \(1/q_i\) and its distribution function is \(1-e^{-q_it}\). Upon leaving i, the chain jumps to state j with probability \(q_{ij}/q_i\).
  • Kolmogorov forward equation: \(P'(t)=P(t)Q\) (P in front); backward equation: \(P'(t)=QP(t)\) (P behind).
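The bookkeeping in these definitions can be sketched for a made-up 3-state chain (all rates below are arbitrary): build Q from off-diagonal rates, force the diagonal so each row sums to 0, then read off the mean holding times \(1/q_i\) and the jump probabilities \(q_{ij}/q_i\):

```python
# Hypothetical jump rates q_ij (i != j) for a 3-state chain.
rates = {(0, 1): 2.0, (0, 2): 1.0,
         (1, 0): 0.5, (1, 2): 0.5,
         (2, 0): 3.0, (2, 1): 1.0}

n = 3
Q = [[0.0] * n for _ in range(n)]
for (i, j), r in rates.items():
    Q[i][j] = r
for i in range(n):
    # q_ii = -(sum of the other entries in the row), so every row sums to 0.
    Q[i][i] = -sum(Q[i][j] for j in range(n) if j != i)

q = [-Q[i][i] for i in range(n)]        # q_i: total rate of leaving state i
mean_holding = [1.0 / qi for qi in q]   # expected holding time in i is 1/q_i
jump = [[Q[i][j] / q[i] if j != i else 0.0 for j in range(n)]
        for i in range(n)]              # jump-chain probabilities q_ij / q_i
```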

Question type:

  • Prove the Kolmogorov forward equation:

  • \[\lim_{h\rightarrow 0} \frac{P(t+h)-P(t)}h = \lim_{h\rightarrow 0} \frac{P(t)[P(h)-I]}h \\ = P(t) \lim_{h\rightarrow 0}\frac{P(h)-I}h=P(t)Q \]

  • Prove the Kolmogorov backward equation:

  • \[\lim_{h\rightarrow 0} \frac{P(t+h)-P(t)}h = \lim_{h\rightarrow 0} \frac{[P(h)-I]P(t)}h \\ = \lim_{h\rightarrow 0}\frac{P(h)-I}h P(t) = QP(t) \]

  • Prove that the transition function of a continuous-time Markov chain (apparently an arbitrary one) is uniformly continuous: [Memorize]

    • Conclusion to prove: \(P_{ij}(t+h) - P_{ij}(t)\le 1-P_{ii}(h)\) and \(P_{ij}(t+h) - P_{ij}(t)\ge -[1-P_{ii}(h)]\)

    • Proof of ≤:

    • \[P_{ij}(t+h) - P_{ij}(t) = \sum_{k\in S} P_{ik}(h)P_{kj}(t) - P_{ij}(t) \\ =\sum_{k\neq i} P_{ik}(h)P_{kj}(t) - P_{ij}(t)[1-P_{ii}(h)] \]

    • Both terms are ≥ 0. Dropping the (subtracted) second term makes the expression larger; dropping the first term makes it smaller.

    • Drop the second term: the expression \(\le \sum_{k\neq i} P_{ik}(h)P_{kj}(t) \le \sum_{k\neq i} P_{ik}(h) = 1-P_{ii}(h)\), which proves the claim.

    • Proof of ≥:

    • Drop the first term: the expression \(\ge - P_{ij}(t)[1-P_{ii}(h)]\ge -[1-P_{ii}(h)]\), which proves the claim.

  • Define \(\tau=\inf\{t: t>0,~X_t\neq X_0\}\), the holding time in state \(X_0\). Given \(P[\tau > t|X_0=0]=e^{-\lambda t}\), \(P[\tau > t|X_0=1]=e^{-\mu t}\) (state space \(S=\{0,1\}\)), write the Kolmogorov forward equation and solve it for the transition probabilities.

    • Denote \(q_0=\lambda,~q_1=\mu\). The Kolmogorov forward equation reads:

    • \[\left[\begin{matrix} p'_{00}(t) & p'_{01}(t) \\ p'_{10}(t) & p'_{11}(t) \end{matrix}\right] = \left[\begin{matrix} p_{00}(t) & p_{01}(t) \\ p_{10}(t) & p_{11}(t) \end{matrix}\right] \cdot\left[\begin{matrix} -\lambda & \lambda \\ \mu & -\mu \end{matrix}\right] \]

    • Then expand and solve the differential equations, substituting \(p_{01}(t) = 1-p_{00}(t)\), \(p_{10}(t) = 1-p_{11}(t)\). The final solution is:

    • \[p_{00}(t) = \frac{\lambda }{\lambda +\mu}e^{-(\lambda +\mu)t} + \frac{\mu }{\lambda +\mu} \\ p_{11}(t) = \frac{\mu }{\lambda +\mu}e^{-(\lambda +\mu)t} + \frac{\lambda }{\lambda +\mu} \]
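A quick numerical cross-check of this solution, with arbitrary example rates λ = 2, μ = 3: Euler-integrate the scalar forward equation for \(p_{00}\) (after the substitution \(p_{01}=1-p_{00}\)) and compare with the closed form above at t = 1:

```python
from math import exp

lam, mu = 2.0, 3.0                 # example rates, not from the problem

def p00_exact(t):
    """Closed-form p_00(t) from the solved forward equation."""
    return lam / (lam + mu) * exp(-(lam + mu) * t) + mu / (lam + mu)

# Forward-equation row for p_00, with p_01 = 1 - p_00 substituted in:
#   p'_00(t) = -lam*p00 + mu*(1 - p00) = mu - (lam + mu)*p00,   p00(0) = 1
h = 1e-5
p = 1.0
for _ in range(100_000):           # explicit Euler steps up to t = 1
    p += h * (mu - (lam + mu) * p)
```

The Euler error at this step size is far below the 1e-3 tolerance, so agreement with the closed form is a meaningful check that the formula really solves \(P'(t)=P(t)Q\).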

4 Poisson Process

Poisson process:

  • The Poisson process is an independent-increment process, written \(\{N_t:t\ge0\}\), with \(P(N_{s+t}-N_s=k) = P(N_t=k) = \frac{(\lambda t)^k}{k!}e^{-\lambda t}\); that is, \(N_t\sim P(\lambda t)\).

  • Define \(S_n = \inf\{t: N_t\ge n\}\), the time at which the nth event occurs.

  • Distribution function of \(S_n\): \(P(S_n \le t)=P(N_t\ge n) = 1-P(N_t\le n-1) = 1-\sum_{k=0}^{n-1}\frac{(\lambda t)^k}{k!}e^{-\lambda t}\)

  • Density of \(S_n\): \(f_{S_n}(t)=\frac{\lambda(\lambda t)^{n-1}}{(n-1)!}e^{-\lambda t}I_{(t\ge 0)}\). (Differentiate the distribution function term by term; adjacent terms cancel and only one survives.)

  • The inter-arrival times \(X_n = S_n-S_{n-1}\) are i.i.d. exponential with rate λ, distribution function \(1-e^{-\lambda x}\). (This characterization is a necessary and sufficient condition for a Poisson process.)

Question type:

  • Find the distribution function and density of \(S_n\) [Memorize].
  • Find the joint density of \((S_1,S_2)\) (enclose \((S_1,S_2)\) in a small square of side h and compute the probability), and prove that \(S_1\) and \(S_2-S_1\) are independent. [Memorize]
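The two \(S_n\) formulas can be cross-checked numerically; the rate λ = 1.5, index n = 3, and horizon T = 4 below are arbitrary choices. Integrating the density by the trapezoid rule should reproduce the distribution-function formula:

```python
from math import exp, factorial

lam, n = 1.5, 3                    # arbitrary rate and event index

def cdf_Sn(t):
    """P(S_n <= t) = 1 - sum_{k=0}^{n-1} (lam t)^k / k! * e^(-lam t)."""
    return 1.0 - sum((lam * t) ** k / factorial(k) * exp(-lam * t)
                     for k in range(n))

def pdf_Sn(t):
    """f_{S_n}(t) = lam * (lam t)^(n-1) / (n-1)! * e^(-lam t) for t >= 0."""
    return lam * (lam * t) ** (n - 1) / factorial(n - 1) * exp(-lam * t)

# Trapezoid rule for the integral of the density over [0, T];
# it should match cdf_Sn(T) since the density is the derivative of the CDF.
T, m = 4.0, 40_000
h = T / m
area = (pdf_Sn(0.0) + pdf_Sn(T)) / 2.0
area += sum(pdf_Sn(i * h) for i in range(1, m))
area *= h
```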

5 Martingales

Necessary knowledge:

  • Definition of a martingale: ① \(E|X_n|<\infty\) (integrability), ② \(E(X_{n+1}|Y_0,\cdots,Y_n) = X_n\) (the martingale property).

  • (In what follows, \(X_n\) is a martingale and \(T\) is a stopping time.)

  • Optional stopping theorem 1: if ① \(P(T<\infty) = 1\) (T is a.s. finite) and ② \(E(\sup_{n\ge 0}|X_{T\land n}|)<\infty\) (the stopped process \(X_{T\land n}\) is dominated by an integrable bound), then \(EX_T = EX_0\). [Seems to be the most commonly used]

    • Optional stopping theorem 2: if ① \(ET<\infty\) and ② there exists \(b<\infty\) such that \(E(|X_{n+1}-X_n|\big| X_0, \cdots, X_n)\le b\), then \(EX_T = EX_0\)

    • Optional stopping theorem 3: if ① \(P(T<\infty) = 1\), ② \(E|X_T|<\infty\), and ③ \(\lim_{n\rightarrow \infty}E|X_n\mathbf 1_{\{T>n\}}| = 0\) (the contribution of \(|X_n|\) on \(\{T>n\}\) vanishes as n → ∞), then \(EX_T = EX_0\)

  • Upcrossing inequality:

    • (Memorize this for the problems where the martingale convergence theorem \(P\big[\lim_{n\rightarrow \infty} X_n=X_\infty\big] = 1\) appears.)

    • \(V^{(n)}(a,b)\) is the number of upcrossings of (a,b) by \(\{X_0,\cdots,X_n\}\), i.e. passages from below a to above b.

    • If \(X_n\) is a submartingale with respect to \(Y_n\) (every martingale is also a submartingale), then

    • \[E[V^{(n)}(a,b)]\le \frac{E(X_n-a)^+ - E(X_0-a)^+}{b-a} \le \frac{EX_n^+ +|a|}{b-a} \]

    • where \(a^+ = \max(a,0)\), i.e. ReLU()

Question type:

  • Prove that a process is a martingale using the definition.
  • Martingale convergence theorem \(P\big[\lim_{n\rightarrow \infty} X_n=X_\infty\big] = 1\): use the upcrossing inequality. [Memorize]
  • For a random walk that steps left or right with probability p = 1/2, show that the stopping time T of hitting a or b satisfies \(ET=|a|b\). [Memorize]
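The claim \(ET=|a|b\) (walk started at 0, absorbed at a < 0 < b) can be sanity-checked by solving the expected-exit-time equations \(E_x = 1 + (E_{x-1}+E_{x+1})/2\) with \(E_a=E_b=0\); here by plain fixed-point sweeps, with arbitrary barriers a = −4, b = 7:

```python
a, b = -4, 7                         # arbitrary barriers with a < 0 < b
states = range(a, b + 1)

# Expected exit times solve E_x = 1 + (E_{x-1} + E_{x+1}) / 2 in the
# interior with E_a = E_b = 0; iterate the map until it converges.
E = {x: 0.0 for x in states}
for _ in range(5000):                # plenty of sweeps for an 11-point grid
    E = {x: 0.0 if x in (a, b) else 1.0 + (E[x - 1] + E[x + 1]) / 2.0
         for x in states}
```

The fixed point is \(E_x=(x-a)(b-x)\), so the start at x = 0 gives \(E_0=(-a)\,b=|a|b\), matching the martingale (optional stopping) answer.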

6 Martingales of Brownian Motion

Brownian motion:

  • Continuous time, continuous state space.
  • \(X_t\sim N(0,t)\): normally distributed, with independent increments.
  • Probability density of normal distribution:\(p(x)=\frac1{\sqrt{2\pi\sigma^2}}\exp(-\frac{x^2}{2\sigma^2})\)
  • Expectation and covariance: \(EB_t=0,~E[B_sB_t] = s\land t\)
  • \(\frac1{\sqrt\lambda}B_{\lambda t}\) and \(tB(\frac 1t)\) are also Brownian motions. The proof apparently checks: Gaussian process + continuity of paths + the right expectation and covariance.

Question type:

  • Joint probability density of Brownian motion: use independent increments.

    • Sums of Brownian values: state directly that the sum is normal, then find the expectation (0) and the variance (using the covariance).
    • Joint density: first write the joint distribution of the independent increments, then change variables. (The matrix version I don't know how to do.)
  • Prove martingales of Brownian motion (continuous-time martingales, not discrete ones):

    • \(B_t\): obvious; use independent increments.
    • \(B_t^2-t\): just write it out following the definition of a martingale, except that the conditioning \(E(X_{t_{n+1}}|B_{t_1},\cdots,B_{t_n})\) uses continuous time indices.
    • \(\exp(\lambda B_t-\lambda^2t/2)\):
      • Verify integrability: use the normal density, integrate and expand; the integral equals 1.
      • Verify the martingale property: expand and use the identity \(E[\exp(\lambda B_t)]=\exp(\lambda^2t/2)\). [Memorize] (Derived by integrating against the density and expanding.)
  • Brownian motion stopping problems:

    • Study the worked examples and memorize them: standard Brownian motion, Brownian motion with drift. [Memorize]
    • For Brownian motion with drift \(X(t)=B(t)+\mu t\), a commonly used martingale is \(V(t)=\exp(-2\mu X(t))\). [Memorize]
  • Mean-square convergence:

  • Given \(t>0,~\forall 0=t_0<t_1<\cdots<t_n=t\), let \(\lambda=\max\{t_{i+1}-t_i\}\); prove

  • \[\lim_{\lambda \rightarrow 0}E\left| \sum_{i=0}^{n-1} (B_{t_{i+1}}-B_{t_i})^2-t\right|^2 = 0 \]

  • Proof: just memorize 91居士's writeup and bound directly.

    • First substitute \(X_i=B_{t_{i}}-B_{t_{i-1}}\); then \(E(X_i)^2 = t_i-t_{i-1},~E(X_i)^4 = 3(t_i-t_{i-1})^2\)
    • Expanding the square gives a term \(\sum_{i=1}^n 3(t_i-t_{i-1})^2\) and a term \(\sum_{i<j} (t_i-t_{i-1})(t_j-t_{j-1})\); pull one factor out of the former (coefficient 3), combine it with the latter into a square, bound the remaining factor by λ, and conclude with \(2\lambda t\rightarrow 0\)
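The statement can be illustrated with a single seeded sample path (the mesh n = 100000 is an arbitrary choice): the quadratic variation \(\sum (B_{t_{i+1}}-B_{t_i})^2\) over [0, t] lands very close to t.

```python
import random
from math import sqrt

random.seed(0)                     # fixed seed for reproducibility
t, n = 1.0, 100_000
dt = t / n

# Increments B_{t_{i+1}} - B_{t_i} are independent N(0, dt); the sum of
# their squares (the quadratic variation) concentrates around t as the
# mesh size shrinks, per the mean-square limit above.
qv = sum(random.gauss(0.0, sqrt(dt)) ** 2 for _ in range(n))
```

The standard deviation of `qv` is \(t\sqrt{2/n}\approx 0.0045\) here, so a single path already sits within a few hundredths of t.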

7 Itô formula

A video that seems to explain the Itô formula clearly: /video/BV1qZ4y1V7oR/

Itô formula:

\[Y(t) = f(t,x(t)) = f(t,B_t) \\ dY = \frac{\partial f}{\partial t} dt + \frac{\partial f}{\partial x} dB_t + \frac12\frac{\partial^2 f}{\partial x^2} dt \]

Question type 1: proof problems; directly construct \(f(t,B_t)\).

Question type 2: find the probability distribution of \(X_t = \int_0^t f(s)B_s\,ds\).

  • First, \(X_t\) is normally distributed, so it suffices to compute μ and σ².

  • \(EX_t = 0\): by Fubini's theorem (?), write the expectation inside the integral directly; the result is 0.

  • \(EX_t^2\)

  • \[EX_t^2 = EX_tX_t = \int_0^t\int_0^t f(u)f(v)\, EB_uB_v\, dudv \\ = \int_0^t\int_0^t f(u)f(v) (u\land v)\, dudv \\ = \int_0^t f(u)\left[\int_0^u vf(v)dv + \int_u^t uf(v)dv\right]du \]

  • \(EB_uB_v = u\land v\) follows from independent increments. Assume \(u<v\); then \(EB_uB_v = E[B_u^2 + B_u(B_v-B_u)]\). The first term is u by definition; for the second, by independent increments \(B_u\) and \(B_v-B_u\) are independent with mean 0, so the second term is 0.
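As a sanity check of the \(EX_t^2\) double integral, take the special case f ≡ 1, where \(X_t=\int_0^t B_s\,ds\) is known to be \(N(0, t^3/3)\); a midpoint Riemann sum of \(\int_0^t\!\int_0^t (u\land v)\,du\,dv\) should then be close to \(t^3/3\). Here t = 2 and the mesh are arbitrary:

```python
t, m = 2.0, 1000                  # horizon and mesh size (arbitrary)
h = t / m

# Midpoint double Riemann sum of the covariance kernel u ∧ v on [0, t]^2;
# for f ≡ 1 the variance formula reduces to this integral, whose exact
# value is t^3 / 3.
total = 0.0
for i in range(m):
    u = (i + 0.5) * h
    for j in range(m):
        v = (j + 0.5) * h
        total += min(u, v) * h * h
```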