Consider a dataset with the following information:

Using the Naive Bayes classifier, what will be the predicted class for a new instance where X1 = Sunny and X2 = Cold?

Yes
No
Cannot be determined
Both A and B
Difficulty Level: 1
Positive Marks: 1.00
Negative Marks: 0.33
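The data table this question refers to is not reproduced above, so the numbers below are made up purely to illustrate the scoring. Under the Naive Bayes rule, each class's score is its prior times the product of the class-conditional probabilities of the observed feature values, and the class with the larger score wins:

```python
def nb_score(prior, likelihoods):
    """Naive Bayes score: P(y) times the product of P(x_i | y) over the features."""
    score = prior
    for p in likelihoods:
        score *= p
    return score

# Hypothetical probabilities (NOT taken from the question's missing table):
score_yes = nb_score(0.6, [0.5, 0.2])  # P(Yes), P(Sunny|Yes), P(Cold|Yes)
score_no = nb_score(0.4, [0.4, 0.6])   # P(No),  P(Sunny|No),  P(Cold|No)
prediction = "Yes" if score_yes > score_no else "No"
print(prediction)
```

With the actual counts from the question's table, the same two products decide between Yes and No.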
8
10
13
15
Difficulty Level: 1
Positive Marks: 1.00
Negative Marks: 0.33
10
11
12
13
Difficulty Level: 1
Positive Marks: 1.00
Negative Marks: 0.33

Ridge regression adds a penalty to the linear regression cost function to prevent overfitting. Which of the following correctly describes the effect of increasing the regularization parameter λ in ridge regression?

It increases the model complexity.

It decreases the model complexity.

It has no effect on the model complexity.

It increases the variance of the model.

Difficulty Level: 1
Positive Marks: 1.00
Negative Marks: 0.33
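For intuition, consider the one-predictor case with centered data and no intercept, where ridge has the closed form β(λ) = Σxᵢyᵢ / (Σxᵢ² + λ). The toy numbers below are made up; they show the coefficient shrinking monotonically as λ grows, i.e. decreasing model complexity, while staying nonzero for any finite λ:

```python
def ridge_coef(x, y, lam):
    """Closed-form ridge coefficient for one centered predictor, no intercept."""
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + lam)

x = [1.0, 2.0, 3.0, 4.0]  # hypothetical predictor values
y = [2.1, 3.9, 6.2, 8.1]  # hypothetical responses
for lam in [0.0, 1.0, 10.0, 100.0]:
    print(lam, ridge_coef(x, y, lam))  # coefficient shrinks as lam grows
```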

Which of the following is a regression problem?

Predicting whether a student will pass or fail based on study hours.

Predicting the price of a house based on its area and location.

Classifying emails as spam or not spam.

Determining the species of a flower based on its petal measurements.

Difficulty Level: 1
Positive Marks: 1.00
Negative Marks: 0.33
The intercept of the regression line.
The slope of the regression line.
The error term.
Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66

Compared to ordinary least squares (OLS) regression, ridge regression:

Always produces the same coefficient estimates as OLS

Produces coefficient estimates that are typically larger than OLS

Shrinks the coefficients towards zero but does not set them exactly to zero

Eliminates some of the features by setting their coefficients exactly to zero

Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66

The Naive Bayes classifier assumes that the features are:

Dependent

Independent given the class

Correlated

Always normally distributed

Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66


In ridge regression, the penalty term added to the cost function is:

The sum of the absolute values of the coefficients.

The sum of the squared coefficients.

The product of the coefficients.

The difference between the predicted and actual values.

Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66
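As a sketch (made-up numbers): the ridge objective is the residual sum of squares plus λ times the sum of squared coefficients, in contrast to lasso's sum of absolute values:

```python
def ridge_cost(y_true, y_pred, coefs, lam):
    """Ridge objective: RSS + lam * sum of squared coefficients (L2 penalty)."""
    rss = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    penalty = lam * sum(b ** 2 for b in coefs)
    return rss + penalty

# Hypothetical values: two residuals of 0.5 each, coefficients 2 and -1
cost = ridge_cost([3.0, 5.0], [2.5, 5.5], [2.0, -1.0], lam=0.1)
print(cost)  # 0.5 (RSS) + 0.1 * 5 (penalty) = 1.0
```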

You are using a Naive Bayes classifier for spam detection. Given the following dataset, what is the predicted class for a new email, and what is its posterior probability?

Spam, Posterior Probability: 0.80

Not Spam, Posterior Probability: 0.65

Spam, Posterior Probability: 0.75

Not Spam, Posterior Probability: 0.85

Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66
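Whatever the posteriors turn out to be, the decision rule Naive Bayes applies is the MAP rule: predict the class with the largest posterior probability. A minimal sketch with illustrative numbers (not taken from the question):

```python
def nb_predict(posteriors):
    """MAP decision rule: return the class with the largest posterior."""
    return max(posteriors, key=posteriors.get)

# Illustrative posteriors for one email:
label = nb_predict({"Spam": 0.80, "Not Spam": 0.20})
print(label)  # Spam
```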

In ridge regression, you are given a dataset where the multicollinearity between the predictors is high. Which of the following statements best describes the impact of ridge regression on the bias-variance tradeoff compared to ordinary least squares (OLS) regression?

Ridge regression increases bias but significantly reduces variance.

Ridge regression reduces both bias and variance.

Ridge regression increases variance but reduces bias.

Ridge regression maintains the same bias and variance as OLS.

Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66
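A small sketch of why (all numbers made up): with two nearly collinear predictors, the OLS normal equations are almost singular, so tiny noise in y blows up the coefficient estimates; adding λ to the diagonal (ridge) stabilizes them at the cost of a little bias. Solving the 2×2 normal equations (XᵀX + λI)β = Xᵀy by Cramer's rule:

```python
def fit_two_predictors(x1, x2, y, lam):
    """Solve (X'X + lam*I) beta = X'y for two predictors via Cramer's rule."""
    a = sum(v * v for v in x1) + lam
    b = sum(u * v for u, v in zip(x1, x2))
    d = sum(v * v for v in x2) + lam
    p = sum(u * t for u, t in zip(x1, y))
    q = sum(v * t for v, t in zip(x2, y))
    det = a * d - b * b
    return (d * p - b * q) / det, (a * q - b * p) / det

x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [1.01, 1.99, 3.02, 3.98]  # nearly identical to x1 (high collinearity)
y = [2.1, 3.9, 6.2, 7.9]       # roughly 2 * x1 plus a little noise

print(fit_two_predictors(x1, x2, y, 0.0))  # OLS: large, unstable coefficients
print(fit_two_predictors(x1, x2, y, 1.0))  # ridge: both near 1, stable
```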

Two predictors in a linear regression model are highly correlated. Which of the following is the most likely consequence, and how can it be mitigated?

High variance in the estimated coefficients, can be mitigated by removing one of the correlated predictors

Low variance in the estimated coefficients, can be mitigated by adding more predictors

Bias in the estimated coefficients, can be mitigated by centering the predictors.

Decrease in model complexity, can be mitigated by increasing the sample size.

Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66

You are working with a large dataset with many irrelevant features. You decide to use Lasso regression, which performs both regularization and feature selection. Given a Lasso regression model with a large regularization parameter λ, which of the following is most likely to happen?

All features will be retained, but their coefficients will be shrunk towards zero.

Some features will be completely eliminated (coefficients set to zero), leading to a simpler model.

All coefficients will be set to zero, making the model unusable.

The model will retain only the most correlated features, but with no penalty on their coefficients.

Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66
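The zeroing behaviour comes from the L1 penalty's soft-thresholding operator. In the single-predictor coordinate-descent update, the coefficient is shrunk by λ and clipped to exactly zero once the unpenalized value falls inside [-λ, λ]; a sketch with made-up inputs:

```python
def soft_threshold(z, lam):
    """Soft-thresholding: shrink z toward 0; exactly 0 when |z| <= lam."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

print(soft_threshold(3.0, 1.0))  # 2.0 -> shrunk but retained
print(soft_threshold(0.8, 1.0))  # 0.0 -> feature eliminated
```

Ridge's squared penalty, by contrast, only rescales coefficients and never sets them exactly to zero.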

Which of the following is a key assumption made by the Naive Bayes classifier?

The features are dependent given the class.

The features are independent given the class.

The features have equal variances.

The features are normally distributed.

Difficulty Level: 1
Positive Marks: 2.00
Negative Marks: 0.66