Consider a Bayesian network with a large number of discrete variables. Which of the following techniques is most likely to improve the efficiency of exact inference?
Using a junction tree algorithm
Approximating the network with a smaller one
Applying belief propagation
All of the above
Difficulty Level: 1
Positive Marks: 1.00
Negative Marks: 0.33
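For reference (an editorial aside, not part of the question): the cost of exact inference with a junction tree is governed by the largest clique the algorithm must build. With n variables of domain size d and induced width w, the usual textbook bound is:

```latex
\text{cost of exact inference} \;=\; O\!\left(n \cdot d^{\,w+1}\right)
```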
Dynamic Bayesian networks are often used to model sequential data. Which of the following challenges is unique to inference in dynamic Bayesian networks compared to static Bayesian networks?
The large number of variables
The temporal dependencies between variables
The complexity of the network structure
All of the above
Difficulty Level: 1
Positive Marks: 1.00
Negative Marks: 0.33
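For reference: what distinguishes dynamic Bayesian networks is the temporal factorization over time slices. For a first-order, two-slice model:

```latex
P(X_{0:T}) \;=\; P(X_0) \prod_{t=1}^{T} P(X_t \mid X_{t-1})
```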
Which of the following is a drawback of Gibbs sampling compared to Metropolis-Hastings?
Gibbs sampling is more computationally expensive.
Gibbs sampling can get stuck near local modes of the distribution.
Gibbs sampling cannot handle continuous variables.
Gibbs sampling cannot handle cyclic graphs.
Difficulty Level: 1
Positive Marks: 1.00
Negative Marks: 0.33
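For reference, a minimal Gibbs-sampler sketch (Python, on a made-up bivariate normal with correlation rho, not a distribution from the question). Because each full-conditional draw is always accepted, a strongly correlated target makes the chain creep, which is the slow-mixing drawback the options allude to:

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples, seed=0):
    """Gibbs sampling for a standard bivariate normal with correlation rho."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = math.sqrt(1.0 - rho * rho)   # std dev of each full conditional
    samples = []
    for _ in range(n_samples):
        x = rng.gauss(rho * y, sd)    # draw x | y
        y = rng.gauss(rho * x, sd)    # draw y | x
        samples.append((x, y))
    return samples

# With rho close to 1, successive samples barely move: slow mixing.
samples = gibbs_bivariate_normal(rho=0.99, n_samples=10_000)
```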
In variable elimination, the order in which variables are eliminated can significantly affect the computational complexity. Which of the following heuristics is least effective in minimizing the size of intermediate factors?
Minimum Fill
Minimum Degree
Maximum Likelihood
Random Elimination
Difficulty Level: 1
Positive Marks: 1.00
Negative Marks: 0.33
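For reference, a sketch (Python, on a made-up moral graph) of the two heuristics the distractors are contrasted against, Minimum Degree and Minimum Fill:

```python
def degree(graph, v):
    """Minimum Degree score: the number of neighbours of v."""
    return len(graph[v])

def fill_in(graph, v):
    """Minimum Fill score: edges that eliminating v would add among its neighbours."""
    nbrs = list(graph[v])
    missing = 0
    for i in range(len(nbrs)):
        for j in range(i + 1, len(nbrs)):
            if nbrs[j] not in graph[nbrs[i]]:
                missing += 1
    return missing

# Hypothetical undirected (moral) graph as an adjacency-set dict.
graph = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C"},
}
next_by_degree = min(graph, key=lambda v: degree(graph, v))
next_by_fill = min(graph, key=lambda v: fill_in(graph, v))
```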
Consider a Bayesian network with nodes A, B, C, and D. If A is conditionally independent of D given B and C, which of the following equations holds true?
P(A|B,C,D) = P(A|B,C)
P(A|D) = P(A|B,C)
P(A,D|B,C) = P(A|B,C) * P(D|B,C)
P(A,D) = P(A|B) * P(D|C)
Difficulty Level: 1
Positive Marks: 1.00
Negative Marks: 0.33
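For reference, the chain rule (conditioning on B and C) connects the definitional form of the independence to the product form; both express the same statement:

```latex
P(A, D \mid B, C) \;=\; P(A \mid B, C, D)\, P(D \mid B, C)
                  \;=\; P(A \mid B, C)\, P(D \mid B, C)
```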
In a Markov chain, the conditional independence assumption states that:
The future is independent of the past given the present.
The past is independent of the future given the present.
The present is independent of the past and future.
The past, present, and future are all dependent on each other.
Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66
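For reference, the first-order Markov property in symbols:

```latex
P(X_{t+1} \mid X_t, X_{t-1}, \ldots, X_0) \;=\; P(X_{t+1} \mid X_t)
```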
A Markov chain is said to be aperiodic if:
Every state can be revisited only at multiples of some period greater than 1.
No state's return times are restricted to multiples of a period greater than 1.
It is irreducible.
It is transient.
Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66
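For reference, the precise definition behind these options: the period of state i is the gcd of its possible return times, and the chain is aperiodic when every period is 1:

```latex
d(i) \;=\; \gcd\{\, n \ge 1 : (P^n)_{ii} > 0 \,\}, \qquad
\text{aperiodic} \iff d(i) = 1 \ \text{for all } i
```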
A Markov chain is said to be irreducible if:
It is possible to reach any state from any other state.
It is not possible to reach any state from any other state.
It has a periodic structure.
It has a transient state.
Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66
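For reference, a minimal sketch (Python, on a made-up three-state chain) that tests irreducibility by checking mutual reachability over the positive-probability edges:

```python
from collections import deque

def reachable(P, start):
    """States reachable from `start` via transitions with positive probability."""
    seen = {start}
    queue = deque([start])
    while queue:
        i = queue.popleft()
        for j, p in enumerate(P[i]):
            if p > 0 and j not in seen:
                seen.add(j)
                queue.append(j)
    return seen

def is_irreducible(P):
    n = len(P)
    return all(len(reachable(P, i)) == n for i in range(n))

# Hypothetical transition matrix: every state reaches every other, so irreducible.
P = [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.5, 0.0, 0.5]]
assert is_irreducible(P)
```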
Markov decision processes (MDPs) involve:
States, actions, rewards, and transition probabilities.
Only states and transition probabilities.
Only states and actions.
Only states and rewards.
Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66
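For reference, a minimal value-iteration sketch (Python, over a made-up two-state MDP) that makes the four ingredients concrete: states, actions, rewards, and transition probabilities:

```python
# Transitions: T[state][action] = list of (probability, next_state) pairs.
T = {
    0: {"stay": [(1.0, 0)], "go": [(0.8, 1), (0.2, 0)]},
    1: {"stay": [(1.0, 1)], "go": [(1.0, 0)]},
}
R = {0: 0.0, 1: 1.0}   # reward for occupying each state (hypothetical)
gamma = 0.9            # discount factor

V = {s: 0.0 for s in T}
for _ in range(100):   # fixed number of Bellman backups
    V = {
        s: R[s] + gamma * max(
            sum(p * V[s2] for p, s2 in outcomes)
            for outcomes in T[s].values()
        )
        for s in T
    }
```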
Bayesian networks allow compact specification of
Joint probability distributions
Propositional Logic statements
Belief
Conditional independence
Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66
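For reference, the compactness comes from the chain-rule factorization over each node's parents:

```latex
P(X_1, \ldots, X_n) \;=\; \prod_{i=1}^{n} P\!\left(X_i \mid \mathrm{Parents}(X_i)\right)
```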
When applying variable elimination to a Bayesian network, which of the following strategies generally leads to the lowest computational cost?
Eliminate nodes with the lowest degree first.
Randomly eliminate nodes.
Eliminate leaf nodes first.
Eliminate nodes with the highest degree first.
Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66
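For reference, a worked extreme case: in a star-shaped graph with a hub of degree k, leaves of degree 1, and domain size d, eliminating the hub first creates an intermediate factor over all k leaves, while eliminating the leaves first never creates a factor over more than one variable:

```latex
\text{hub first: factor of size } O(d^{\,k}) \qquad\qquad
\text{leaves first: factors of size } O(d)
```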
Consider using rejection sampling on a Bayesian network. As the number of evidence variables increases, the efficiency of rejection sampling:
Increases linearly.
Decreases linearly.
Decreases exponentially.
Remains unchanged.
Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66
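For reference: rejection sampling keeps only samples that match all the evidence, so its acceptance rate equals the prior probability of the evidence; when the k evidence variables are weakly coupled this is roughly a product of k per-variable probabilities, hence the exponential decay:

```latex
P(\text{accept}) \;=\; P(e_1, \ldots, e_k) \;\approx\; \prod_{i=1}^{k} p_i,
\qquad 0 < p_i < 1
```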
When using the likelihood weighting method for sampling in Bayesian networks, how does the incorporation of more evidence variables generally affect the variance of the weights?
It remains unchanged.
It decreases.
It increases.
It becomes zero.
Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66
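For reference, a minimal likelihood-weighting sketch (Python, on a hypothetical chain X1 -> X2 -> ... -> Xk of binary variables with every variable after X1 observed True; the CPT value p is made up). Each additional evidence variable multiplies another random factor into the weight, which is what drives the weight variance up:

```python
import random

def likelihood_weight(k, p=0.7, rng=random):
    """One likelihood-weighted sample on a hypothetical binary chain."""
    x = rng.random() < 0.5          # sample the unobserved root X1 ~ Bernoulli(0.5)
    w = 1.0
    for _ in range(k - 1):          # X2..Xk are evidence, all observed as True
        w *= p if x else 1.0 - p    # multiply in P(X_next = True | x)
        x = True                    # clamp the evidence variable to its value
    return w

# More evidence terms -> the weights spread over a wider range.
weights = [likelihood_weight(k=10) for _ in range(1000)]
```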
Given an HMM with states S1 and S2, and the following transition probabilities: P(S1|S1) = 0.7, P(S2|S1) = 0.3, P(S1|S2) = 0.4, P(S2|S2) = 0.6, what is the probability of transitioning from S1 to S2 and then back to S1? (Up to 2 decimal places)
0.12
Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.00
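For reference, the worked product behind the answer (first S1 to S2, then S2 back to S1):

```latex
P(S_1 \!\to\! S_2 \!\to\! S_1) \;=\; P(S_2 \mid S_1) \cdot P(S_1 \mid S_2)
\;=\; 0.3 \times 0.4 \;=\; 0.12
```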
Which of the following is NOT a component of an HMM?
A set of hidden states
A set of observable states
Transition probabilities
Emission probabilities
Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66
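For reference, a minimal sketch (Python) of a full HMM specification; the transition table reuses the numbers from the earlier HMM question, while the observation alphabet, initial distribution, and emission table are hypothetical:

```python
hmm = {
    "hidden_states": ["S1", "S2"],
    "observations": ["a", "b"],                   # hypothetical alphabet
    "initial": {"S1": 0.5, "S2": 0.5},            # assumed uniform start
    "transition": {"S1": {"S1": 0.7, "S2": 0.3},
                   "S2": {"S1": 0.4, "S2": 0.6}},
    "emission": {"S1": {"a": 0.9, "b": 0.1},      # hypothetical values
                 "S2": {"a": 0.2, "b": 0.8}},
}
```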