| dataset | id | question | choices | answer |
|---|---|---|---|---|
mcq_train
|
mcq_train_1
|
Question: Let $n$ be an integer such that $n\geq 2$, let $A \in \R^{n \times n}$, and let $\xv \in \R^n$. Consider the function $f(\xv) = \xv^\top A \xv$ defined over $\R^n$. Which of the following is the gradient of the function $f$?
Options: $2 \xv^\top A$, $2A\xv$, $A^\top \xv + A\xv$, $2A^\top \xv$
|
$2 \xv^\top A$, $2A\xv$, $A^\top \xv + A\xv$, $2A^\top \xv$
|
C
|
mcq_train
|
mcq_train_2
|
Question: Consider a matrix factorization problem of the form $\mathbf{X}=\mathbf{W Z}^{\top}$ to obtain an item-user recommender system, where $x_{ij}$ denotes the rating given by the $j^{\text{th}}$ user to the $i^{\text{th}}$ item. We use root mean square error (RMSE) to gauge the quality of the factorization obtained. Select the correct option.
Options: Given a new item and a few ratings from existing users, we need to retrain the already trained recommender system from scratch to generate robust ratings for the user-item pairs containing this item., Regularization terms for $\mathbf{W}$ and $\mathbf{Z}$ in the form of their respective Frobenius norms are added to the RMSE so that the resulting objective function becomes convex., For obtaining a robust factorization of a matrix $\mathbf{X}$ with $D$ rows and $N$ elements where $N \ll D$, the latent dimension $\mathrm{K}$ should lie somewhere between $D$ and $N$., None of the other options are correct.
|
Given a new item and a few ratings from existing users, we need to retrain the already trained recommender system from scratch to generate robust ratings for the user-item pairs containing this item., Regularization terms for $\mathbf{W}$ and $\mathbf{Z}$ in the form of their respective Frobenius norms are added to the RMSE so that the resulting objective function becomes convex., For obtaining a robust factorization of a matrix $\mathbf{X}$ with $D$ rows and $N$ elements where $N \ll D$, the latent dimension $\mathrm{K}$ should lie somewhere between $D$ and $N$., None of the other options are correct.
|
D
|
mcq_train
|
mcq_train_3
|
Question: Consider the loss function $L: \R^d \to \R$, $L(\wv) = \frac{\beta}{2}\|\wv\|^2$, where $\beta > 0$ is a constant. We run gradient descent on $L$ with a stepsize $\gamma > 0$ starting from some $\wv_0 \neq 0$. Which of the statements below is true?
Options: Gradient descent converges to the global minimum for any stepsize $\gamma > 0$., Gradient descent with stepsize $\gamma = \frac{2}{\beta}$ produces iterates that diverge to infinity ($\|\wv_t\| \to \infty$ as $t \to \infty$)., Gradient descent converges in two steps for $\gamma = \frac{1}{\beta}$ (i.e., $\wv_2$ is the \textbf{first} iterate attaining the global minimum of $L$)., Gradient descent converges to the global minimum for any stepsize in the interval $\gamma \in \big( 0, \frac{2}{\beta}\big)$.
|
Gradient descent converges to the global minimum for any stepsize $\gamma > 0$., Gradient descent with stepsize $\gamma = \frac{2}{\beta}$ produces iterates that diverge to infinity ($\|\wv_t\| \to \infty$ as $t \to \infty$)., Gradient descent converges in two steps for $\gamma = \frac{1}{\beta}$ (i.e., $\wv_2$ is the \textbf{first} iterate attaining the global minimum of $L$)., Gradient descent converges to the global minimum for any stepsize in the interval $\gamma \in \big( 0, \frac{2}{\beta}\big)$.
|
D
|
mcq_train
|
mcq_train_4
|
Question: You are doing your ML project. It is a regression task under a square loss. Your neighbor uses linear regression and least squares. You are smarter. You are using a neural net with 10 layers and activation functions $f(x)=3x$. You have a powerful laptop but not a supercomputer. You are betting your neighbor a beer at Satellite on who will have a substantially better score. However, in the end it is essentially a tie, so we decide to have two beers and both pay. What is the reason for the outcome of this bet?
Options: Because we use exactly the same scheme., Because it is almost impossible to train a network with 10 layers without a supercomputer., Because I should have used more layers., Because I should have used only one layer.
|
Because we use exactly the same scheme., Because it is almost impossible to train a network with 10 layers without a supercomputer., Because I should have used more layers., Because I should have used only one layer.
|
A
|
mcq_train
|
mcq_train_5
|
Question: Let $f:\R^D \rightarrow \R$ be an $L$-hidden layer multi-layer perceptron (MLP) such that
\[
f(\xv)=\sigma_{L+1}\big(\wv^\top\sigma_L(\Wm_L\sigma_{L-1}(\Wm_{L-1}\dots\sigma_1(\Wm_1\xv)))\big),
\]
with $\wv\in\R^{M}$, $\Wm_1\in\R^{M \times D}$ and $\Wm_\ell\in\R^{M \times M}$ for $\ell=2,\dots, L$, and $\sigma_i$ for $i=1,\dots,L+1$ is an entry-wise activation function. For any MLP $f$ and a classification threshold $\tau$ let $C_{f,\tau}$ be a binary classifier that outputs YES for a given input $\xv$ if $f(\xv) \leq \tau$ and NO otherwise. \vspace{3mm}
Assume $\sigma_{L+1}$ is the element-wise \textbf{sigmoid} function and $C_{f,\frac{1}{2}}$ is able to obtain a high accuracy on a given binary classification task $T$. Let $g$ be the MLP obtained by multiplying the parameters \textbf{in the last layer} of $f$, i.e. $\wv$, by 2. Moreover, let $h$ be the MLP obtained by replacing $\sigma_{L+1}$ with element-wise \textbf{ReLU}. Finally, let $q$ be the MLP obtained by doing both of these actions. Which of the following is true?
$\mathrm{ReLU}(x) = \max\{x, 0\}$ \\
$\mathrm{Sigmoid}(x) = \frac{1}{1 + e^{-x}}$
Options: $C_{g,\frac{1}{2}}$ may have an accuracy significantly lower than $C_{f,\frac{1}{2}}$ on $T$, $C_{h, 0}$ may have an accuracy significantly lower than $C_{f,\frac{1}{2}}$ on $T$, $C_{q, 0}$ may have an accuracy significantly lower than $C_{f,\frac{1}{2}}$ on $T$, $C_{g,\frac{1}{2}}$, $C_{h, 0}$, and $C_{q, 0}$ have the same accuracy as $C_{f,\frac{1}{2}}$ on $T$
|
$C_{g,\frac{1}{2}}$ may have an accuracy significantly lower than $C_{f,\frac{1}{2}}$ on $T$, $C_{h, 0}$ may have an accuracy significantly lower than $C_{f,\frac{1}{2}}$ on $T$, $C_{q, 0}$ may have an accuracy significantly lower than $C_{f,\frac{1}{2}}$ on $T$, $C_{g,\frac{1}{2}}$, $C_{h, 0}$, and $C_{q, 0}$ have the same accuracy as $C_{f,\frac{1}{2}}$ on $T$
|
D
|
mcq_train
|
mcq_train_6
|
Question: The inverse document frequency of a term can increase
Options: by adding the term to a document that contains the term, by removing a document from the document collection that does not contain the term, by adding a document to the document collection that contains the term, by adding a document to the document collection that does not contain the term
|
by adding the term to a document that contains the term, by removing a document from the document collection that does not contain the term, by adding a document to the document collection that contains the term, by adding a document to the document collection that does not contain the term
|
D
|
mcq_train
|
mcq_train_7
|
Question: Maintaining the order of document identifiers for vocabulary construction when partitioning the document collection is important
Options: in the index merging approach for single node machines, in the map-reduce approach for parallel clusters, in both, in neither of the two
|
in the index merging approach for single node machines, in the map-reduce approach for parallel clusters, in both, in neither of the two
|
A
|
mcq_train
|
mcq_train_8
|
Question: Which of the following is correct regarding Crowdsourcing?
Options: Random Spammers give always the same answer for every question, It is applicable only for binary classification problems, Honey Pot discovers all the types of spammers but not the sloppy workers, The output of Majority Decision can be equal to the one of Expectation-Maximization
|
Random Spammers give always the same answer for every question, It is applicable only for binary classification problems, Honey Pot discovers all the types of spammers but not the sloppy workers, The output of Majority Decision can be equal to the one of Expectation-Maximization
|
D
|
mcq_train
|
mcq_train_9
|
Question: How does LSI querying work?
Options: The query vector is treated as an additional term; then cosine similarity is computed, The query vector is transformed by Matrix S; then cosine similarity is computed, The query vector is treated as an additional document; then cosine similarity is computed, The query vector is multiplied with an orthonormal matrix; then cosine similarity is computed
|
The query vector is treated as an additional term; then cosine similarity is computed, The query vector is transformed by Matrix S; then cosine similarity is computed, The query vector is treated as an additional document; then cosine similarity is computed, The query vector is multiplied with an orthonormal matrix; then cosine similarity is computed
|
C
|
mcq_train
|
mcq_train_10
|
Question: In a Ranked Retrieval result, the result at position k is non-relevant and at k+1 is relevant. Which of the following is always true (P@k and R@k are the precision and recall of the result set consisting of the k top ranked documents)?
Options: P@k-1 > P@k+1, P@k-1 = P@k+1, R@k-1 < R@k+1, R@k-1 = R@k+1
|
P@k-1 > P@k+1, P@k-1 = P@k+1, R@k-1 < R@k+1, R@k-1 = R@k+1
|
C
|
mcq_train
|
mcq_train_11
|
Question: The term frequency of a term is normalized
Options: by the maximal frequency of all terms in the document, by the maximal frequency of the term in the document collection, by the maximal frequency of any term in the vocabulary, by the maximal term frequency of any document in the collection
|
by the maximal frequency of all terms in the document, by the maximal frequency of the term in the document collection, by the maximal frequency of any term in the vocabulary, by the maximal term frequency of any document in the collection
|
A
|
mcq_train
|
mcq_train_12
|
Question: When compressing the adjacency list of a given URL, a reference list
Options: Is chosen from neighboring URLs that can be reached in a small number of hops, May contain URLs not occurring in the adjacency list of the given URL, Lists all URLs not contained in the adjacency list of given URL, All of the above
|
Is chosen from neighboring URLs that can be reached in a small number of hops, May contain URLs not occurring in the adjacency list of the given URL, Lists all URLs not contained in the adjacency list of given URL, All of the above
|
B
|
mcq_train
|
mcq_train_13
|
Question: In the χ2 statistics for a binary feature, we obtain P(χ2 | DF = 1) > 0.05. This means in this case, it is assumed:
Options: That the class label depends on the feature, That the class label is independent of the feature, That the class label correlates with the feature, None of the above
|
That the class label depends on the feature, That the class label is independent of the feature, That the class label correlates with the feature, None of the above
|
B
|
mcq_train
|
mcq_train_14
|
Question: 10 itemsets out of 100 contain item A, of which 5 also contain B. The rule A -> B has:
Options: 5% support and 10% confidence, 10% support and 50% confidence, 5% support and 50% confidence, 10% support and 10% confidence
|
5% support and 10% confidence, 10% support and 50% confidence, 5% support and 50% confidence, 10% support and 10% confidence
|
C
|
mcq_train
|
mcq_train_15
|
Question: A basic statement in RDF would be expressed in the relational data model by a table
Options: with one attribute, with two attributes, with three attributes, cannot be expressed in the relational data model
|
with one attribute, with two attributes, with three attributes, cannot be expressed in the relational data model
|
B
|
mcq_train
|
mcq_train_16
|
Question: Which of the following statements is wrong regarding RDF?
Options: An RDF statement would be expressed in SQL as a tuple in a table, Blank nodes in RDF graphs correspond to the special value NULL in SQL, The object value of a type statement corresponds to a table name in SQL, RDF graphs can be encoded as SQL databases
|
An RDF statement would be expressed in SQL as a tuple in a table, Blank nodes in RDF graphs correspond to the special value NULL in SQL, The object value of a type statement corresponds to a table name in SQL, RDF graphs can be encoded as SQL databases
|
B
|
mcq_train
|
mcq_train_17
|
Question: What is TRUE regarding Fagin's algorithm?
Options: Posting files need to be indexed by TF-IDF weights, It performs a complete scan over the posting files, It never reads more than (kn)^(1/2) entries from a posting list, It provably returns the k documents with the largest aggregate scores
|
Posting files need to be indexed by TF-IDF weights, It performs a complete scan over the posting files, It never reads more than (kn)^(1/2) entries from a posting list, It provably returns the k documents with the largest aggregate scores
|
D
|
mcq_train
|
mcq_train_18
|
Question: A false negative in sampling can only occur for itemsets with support smaller than
Options: the threshold s, p*s, p*m, None of the above
|
the threshold s, p*s, p*m, None of the above
|
D
|
mcq_train
|
mcq_train_19
|
Question: Why is XML a document model?
Options: It supports application-specific markup, It supports domain-specific schemas, It has a serialized representation, It uses HTML tags
|
It supports application-specific markup, It supports domain-specific schemas, It has a serialized representation, It uses HTML tags
|
C
|
mcq_train
|
mcq_train_20
|
Question: A retrieval model attempts to capture
Options: the interface by which a user is accessing information, the importance a user gives to a piece of information for a query, the formal correctness of a query formulation by user, the structure by which a document is organised
|
the interface by which a user is accessing information, the importance a user gives to a piece of information for a query, the formal correctness of a query formulation by user, the structure by which a document is organised
|
B
|
mcq_train
|
mcq_train_21
|
Question: When computing HITS, the initial values
Options: Are set all to 1, Are set all to 1/n, Are set all to 1/sqrt(n), Are chosen randomly
|
Are set all to 1, Are set all to 1/n, Are set all to 1/sqrt(n), Are chosen randomly
|
A
|
mcq_train
|
mcq_train_22
|
Question: When indexing a document collection using an inverted file, the main space requirement is implied by
Options: The access structure, The vocabulary, The index file, The postings file
|
The access structure, The vocabulary, The index file, The postings file
|
D
|
mcq_train
|
mcq_train_23
|
Question: For a user that has not done any ratings, which method can make a prediction?
Options: User-based collaborative RS, Item-based collaborative RS, Content-based RS, None of the above
|
User-based collaborative RS, Item-based collaborative RS, Content-based RS, None of the above
|
D
|
mcq_train
|
mcq_train_24
|
Question: Which statement is correct?
Options: The Viterbi algorithm works because words are independent in a sentence, The Viterbi algorithm works because it is applied to an HMM model that makes an independence assumption on the word dependencies in sentences, The Viterbi algorithm works because it makes an independence assumption on the word dependencies in sentences, The Viterbi algorithm works because it is applied to an HMM model that captures independence of words in a sentence
|
The Viterbi algorithm works because words are independent in a sentence, The Viterbi algorithm works because it is applied to an HMM model that makes an independence assumption on the word dependencies in sentences, The Viterbi algorithm works because it makes an independence assumption on the word dependencies in sentences, The Viterbi algorithm works because it is applied to an HMM model that captures independence of words in a sentence
|
B
|
mcq_train
|
mcq_train_25
|
Question: Which of the following is WRONG about inverted files? (Slide 24,28 Week 3)
Options: The space requirement for the postings file is O(n), Variable length compression is used to reduce the size of the index file, The index file has space requirement of O(n^beta), where beta is about 1⁄2, Storing differences among word addresses reduces the size of the postings file
|
The space requirement for the postings file is O(n), Variable length compression is used to reduce the size of the index file, The index file has space requirement of O(n^beta), where beta is about 1⁄2, Storing differences among word addresses reduces the size of the postings file
|
B
|
mcq_train
|
mcq_train_26
|
Question: In User-Based Collaborative Filtering, which of the following is TRUE?
Options: Pearson Correlation Coefficient and Cosine Similarity have the same value range and return the same similarity ranking for the users., Pearson Correlation Coefficient and Cosine Similarity have different value ranges and can return different similarity rankings for the users, Pearson Correlation Coefficient and Cosine Similarity have different value ranges, but return the same similarity ranking for the users, Pearson Correlation Coefficient and Cosine Similarity have the same value range but can return different similarity rankings for the users
|
Pearson Correlation Coefficient and Cosine Similarity have the same value range and return the same similarity ranking for the users., Pearson Correlation Coefficient and Cosine Similarity have different value ranges and can return different similarity rankings for the users, Pearson Correlation Coefficient and Cosine Similarity have different value ranges, but return the same similarity ranking for the users, Pearson Correlation Coefficient and Cosine Similarity have the same value range but can return different similarity rankings for the users
|
B
|
mcq_train
|
mcq_train_27
|
Question: Which of the following is TRUE for Recommender Systems (RS)?
Options: The complexity of the Content-based RS depends on the number of users, Item-based RS need not only the ratings but also the item features, Matrix Factorization is typically robust to the cold-start problem., Matrix Factorization can predict a score for any user-item combination in the dataset.
|
The complexity of the Content-based RS depends on the number of users, Item-based RS need not only the ratings but also the item features, Matrix Factorization is typically robust to the cold-start problem., Matrix Factorization can predict a score for any user-item combination in the dataset.
|
D
|
mcq_train
|
mcq_train_28
|
Question: Which of the following properties is part of the RDF Schema Language?
Options: Description, Type, Predicate, Domain
|
Description, Type, Predicate, Domain
|
D
|
mcq_train
|
mcq_train_29
|
Question: What is a correct pruning strategy for decision tree induction?
Options: Apply Maximum Description Length principle, Stop partitioning a node when either positive or negative samples dominate the samples of the other class, Choose the model that maximizes L(M) + L(M|D), Remove attributes with lowest information gain
|
Apply Maximum Description Length principle, Stop partitioning a node when either positive or negative samples dominate the samples of the other class, Choose the model that maximizes L(M) + L(M|D), Remove attributes with lowest information gain
|
B
|
mcq_train
|
mcq_train_30
|
Question: In the first pass over the database of the FP Growth algorithm
Options: Frequent itemsets are extracted, A tree structure is constructed, The frequency of items is computed, Prefixes among itemsets are determined
|
Frequent itemsets are extracted, A tree structure is constructed, The frequency of items is computed, Prefixes among itemsets are determined
|
C
|
mcq_train
|
mcq_train_31
|
Question: In a FP tree, the leaf nodes are the ones with:
Options: Lowest confidence, Lowest support, Least in the alphabetical order, None of the above
|
Lowest confidence, Lowest support, Least in the alphabetical order, None of the above
|
B
|
mcq_train
|
mcq_train_32
|
Question: Considering the transaction below, which one is WRONG?
|Transaction ID |Items Bought|
|--|--|
|1|Tea|
|2|Tea, Yoghurt|
|3|Tea, Yoghurt, Kebap|
|4 |Kebap |
|5|Tea, Kebap|
Options: {Yoghurt} -> {Kebab} has 50% confidence, {Yoghurt, Kebap} has 20% support, {Tea} has the highest support, {Yoghurt} has the lowest support among all itemsets
|
{Yoghurt} -> {Kebab} has 50% confidence, {Yoghurt, Kebap} has 20% support, {Tea} has the highest support, {Yoghurt} has the lowest support among all itemsets
|
D
|
mcq_train
|
mcq_train_33
|
Question: Which is an appropriate method for fighting skewed distributions of class labels in
classification?
Options: Include an over-proportional number of samples from the larger class, Use leave-one-out cross validation, Construct the validation set such that the class label distribution approximately matches the global distribution of the class labels, Generate artificial data points for the most frequent classes
|
Include an over-proportional number of samples from the larger class, Use leave-one-out cross validation, Construct the validation set such that the class label distribution approximately matches the global distribution of the class labels, Generate artificial data points for the most frequent classes
|
C
|
mcq_train
|
mcq_train_34
|
Question: Dude said “I like bowling”. With how many statements can we express this sentence using RDF Reification?
Options: We cannot, 1, 3, 5
|
We cannot, 1, 3, 5
|
D
|
mcq_train
|
mcq_train_35
|
Question: The type statement in RDF would be expressed in the relational data model by a table
Options: with one attribute, with two attributes, with three attributes, cannot be expressed in the relational data model
|
with one attribute, with two attributes, with three attributes, cannot be expressed in the relational data model
|
A
|
mcq_train
|
mcq_train_36
|
Question: How does matrix factorization address the issue of missing ratings?
Options: It uses regularization of the rating matrix, It performs gradient descent only for existing ratings, It sets missing ratings to zero, It maps ratings into a lower-dimensional space
|
It uses regularization of the rating matrix, It performs gradient descent only for existing ratings, It sets missing ratings to zero, It maps ratings into a lower-dimensional space
|
B
|
mcq_train
|
mcq_train_37
|
Question: When constructing a word embedding, negative samples are
Options: word-context word combinations that are not occurring in the document collection, context words that are not part of the vocabulary of the document collection, all less frequent words that do not occur in the context of a given word, only words that never appear as context word
|
word-context word combinations that are not occurring in the document collection, context words that are not part of the vocabulary of the document collection, all less frequent words that do not occur in the context of a given word, only words that never appear as context word
|
A
|
mcq_train
|
mcq_train_38
|
Question: In vector space retrieval each row of the matrix M corresponds to
Options: A document, A concept, A query, A term
|
A document, A concept, A query, A term
|
D
|
mcq_train
|
mcq_train_39
|
Question: Applying SVD to a term-document matrix M. Each concept is represented in K
Options: as a singular value, as a linear combination of terms of the vocabulary, as a linear combination of documents in the document collection, as a least squares approximation of the matrix M
|
as a singular value, as a linear combination of terms of the vocabulary, as a linear combination of documents in the document collection, as a least squares approximation of the matrix M
|
B
|
mcq_train
|
mcq_train_40
|
Question: An HMM model would not be an appropriate approach to identify
Options: Named Entities, Part-of-Speech tags, Concepts, Word n-grams
|
Named Entities, Part-of-Speech tags, Concepts, Word n-grams
|
D
|
mcq_train
|
mcq_train_41
|
Question: Which of the following is NOT an (instance-level) ontology?
Options: Wordnet, WikiData, Schema.org, Google Knowledge Graph
|
Wordnet, WikiData, Schema.org, Google Knowledge Graph
|
C
|
mcq_train
|
mcq_train_42
|
Question: We consider a classification problem on linearly separable data. Our dataset had an outlier---a point that is very far from the other datapoints in distance (and also far from margins in SVM but still correctly classified by the SVM classifier).
We trained the SVM, logistic regression and 1-nearest-neighbour models on this dataset.
We tested the trained models on a test set that comes from the same distribution as the training set, but doesn't have any outlier points.
For any vector $v \in \R^D$ let $\|v\|_2 := \sqrt{v_1^2 + \dots + v_D^2}$ denote the Euclidean norm. The hard-margin SVM problem for linearly separable points in $\R^D$ is to minimize the Euclidean norm $\| \wv \|_2$ under some constraints.
What are the additional constraints for this optimization problem?
Options: $y_n \ww^\top x_n \geq 1 ~ \forall n \in \{1,\cdots,N\}$, $\ww^\top x_n \geq 1 ~ \forall n \in\{1,\cdots,N\}$, $y_n + \ww^\top x_n \geq 1 ~ \forall n \in \{1,\cdots,N\}$, $\frac{y_n}{\ww^\top x_n}\geq 1 ~ \forall n \in \{1,\cdots,N\}$
|
$y_n \ww^\top x_n \geq 1 ~ \forall n \in \{1,\cdots,N\}$, $\ww^\top x_n \geq 1 ~ \forall n \in\{1,\cdots,N\}$, $y_n + \ww^\top x_n \geq 1 ~ \forall n \in \{1,\cdots,N\}$, $\frac{y_n}{\ww^\top x_n}\geq 1 ~ \forall n \in \{1,\cdots,N\}$
|
A
|
mcq_train
|
mcq_train_43
|
Question: Let $\xv, \wv, \deltav \in \R^d$, $y \in \{-1, 1\}$, and $\varepsilon \in \R_{>0}$ be an arbitrary positive value. Which of the following is NOT true in general:
Options: $\argmax_{\|\deltav\|_2 \leq \varepsilon} \log_2(1 + \exp(-y \wv^\top (\xv + \deltav))) = \argmax_{\|\deltav\|_2 \leq \varepsilon} \exp(-y \wv^\top (\xv + \deltav))$, $\argmax_{\|\deltav\|_2 \leq \varepsilon} \log_2(1 + \exp(-y \wv^\top (\xv + \deltav))) = \argmin_{\|\deltav\|_2 \leq \varepsilon} y \wv^\top (\xv + \deltav)$, $\argmax_{\|\deltav\|_2 \leq \varepsilon} \log_2(1 + \exp(-y \wv^\top (\xv + \deltav))) = \argmax_{\|\deltav\|_2 \leq \varepsilon} 1 - \tanh(y \wv^\top (\xv + \deltav))$ \\ where $\tanh(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}}$, $\argmax_{\|\deltav\|_2 \leq \varepsilon} \log_2(1 + \exp(-y \wv^\top (\xv + \deltav))) = \argmax_{\|\deltav\|_2 \leq \varepsilon} \mathbf{1}_{y \wv^\top (\xv + \deltav) \leq 0}$
|
$\argmax_{\|\deltav\|_2 \leq \varepsilon} \log_2(1 + \exp(-y \wv^\top (\xv + \deltav))) = \argmax_{\|\deltav\|_2 \leq \varepsilon} \exp(-y \wv^\top (\xv + \deltav))$, $\argmax_{\|\deltav\|_2 \leq \varepsilon} \log_2(1 + \exp(-y \wv^\top (\xv + \deltav))) = \argmin_{\|\deltav\|_2 \leq \varepsilon} y \wv^\top (\xv + \deltav)$, $\argmax_{\|\deltav\|_2 \leq \varepsilon} \log_2(1 + \exp(-y \wv^\top (\xv + \deltav))) = \argmax_{\|\deltav\|_2 \leq \varepsilon} 1 - \tanh(y \wv^\top (\xv + \deltav))$ \\ where $\tanh(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}}$, $\argmax_{\|\deltav\|_2 \leq \varepsilon} \log_2(1 + \exp(-y \wv^\top (\xv + \deltav))) = \argmax_{\|\deltav\|_2 \leq \varepsilon} \mathbf{1}_{y \wv^\top (\xv + \deltav) \leq 0}$
|
D
|
mcq_train
|
mcq_train_44
|
Question: Consider a binary classification task as in Figure~\AMCref{fig:lr_data}, which consists of 14 two-dimensional linearly separable samples (circles correspond to label $y=1$ and pluses correspond to label $y=0$). We would like to predict the label $y=1$ of a sample $(x_1, x_2)$ when the following holds true
\[
\prob(y=1|x_1, x_2, w_1, w_2) = \frac{1}{1+\exp(-w_1x_1 -w_2x_2)} > 0.5
\]
where $w_1$ and $w_2$ are parameters of the model.
If we obtain $(w_1, w_2)$ by optimizing the following objective
$$
- \sum_{n=1}^N\log \prob(y_n| x_{n1}, x_{n2}, w_1, w_2) + \frac{C}{2} w_2^2
$$
where $C$ is very large, then the decision boundary will be close to which of the following lines?
Options: $x_1 + x_2 = 0$, $x_1 - x_2 = 0$, $x_1 = 0$, $x_2 = 0$
|
$x_1 + x_2 = 0$, $x_1 - x_2 = 0$, $x_1 = 0$, $x_2 = 0$
|
C
|
mcq_train
|
mcq_train_45
|
Question: Given a matrix $\Xm$ of shape $D \times N$ with a singular value decomposition (SVD), $\Xm=USV^\top$, suppose $\Xm$ has rank $K$ and $\Am=\Xm\Xm^\top$.
Which one of the following statements is \textbf{false}?
Options: The eigenvalues of $\Am$ are the singular values of $\Xm$, $\Am$ is positive semi-definite, i.e. all eigenvalues of $\Am$ are non-negative, The eigendecomposition of $\Am$ is also a singular value decomposition (SVD) of $\Am$, A vector $v$ that can be expressed as a linear combination of the last $D-K$ columns of $U$, i.e. $v=\sum_{i=K+1}^{D} w_{i}u_{i}$ (where $u_{i}$ is the $i$-th column of $U$), lies in the null space of $\Xm^\top$
|
The eigenvalues of $\Am$ are the singular values of $\Xm$, $\Am$ is positive semi-definite, i.e. all eigenvalues of $\Am$ are non-negative, The eigendecomposition of $\Am$ is also a singular value decomposition (SVD) of $\Am$, A vector $v$ that can be expressed as a linear combination of the last $D-K$ columns of $U$, i.e. $v=\sum_{i=K+1}^{D} w_{i}u_{i}$ (where $u_{i}$ is the $i$-th column of $U$), lies in the null space of $\Xm^\top$
|
A
|
mcq_train
|
mcq_train_46
|
Question: Assume we have $N$ training samples $(\xx_1, y_1), \dots, (\xx_N, y_N)$ where for each sample $i \in \{1, \dots, N\}$ we have that $\xx_i \in \R^d$ and $y_i \in \R$. For $\lambda \geq 0$, we consider the following loss:
$L_{\lambda}(\ww) = \frac{1}{N} \sum_{i = 1}^N (y_i - \xx_i^\top \ww)^2 + \lambda \Vert \ww \Vert_2$, and let $C_\lambda = \min_{\ww \in \R^d} L_{\lambda}(\ww)$ denote the optimal loss value.
Which of the following statements is \textbf{true}:
Options: For $\lambda = 0$, the loss $L_{0}$ is convex and has a unique minimizer., $C_\lambda$ is a non-increasing function of $\lambda$., $C_\lambda$ is a non-decreasing function of $\lambda$., None of the statements are true.
|
For $\lambda = 0$, the loss $L_{0}$ is convex and has a unique minimizer., $C_\lambda$ is a non-increasing function of $\lambda$., $C_\lambda$ is a non-decreasing function of $\lambda$., None of the statements are true.
|
C
|
mcq_train
|
mcq_train_47
|
Question: Which statement about \textit{black-box} adversarial attacks is true:
Options: They require access to the gradients of the model being attacked. , They are highly specific and cannot be transferred from a model which is similar to the one being attacked., They cannot be implemented via gradient-free (e.g., grid search or random search) optimization methods., They can be implemented using gradient approximation via a finite difference formula.
|
They require access to the gradients of the model being attacked. , They are highly specific and cannot be transferred from a model which is similar to the one being attacked., They cannot be implemented via gradient-free (e.g., grid search or random search) optimization methods., They can be implemented using gradient approximation via a finite difference formula.
|
D
|
mcq_train
|
mcq_train_48
|
Question: Consider the function $f(x)=-x^{2}$. Which of the following statements are true regarding subgradients of $f(x)$ at $x=0$ ?
Options: A subgradient does not exist as $f(x)$ is differentiable at $x=0$., A subgradient exists but is not unique., A subgradient exists and is unique., A subgradient does not exist even though $f(x)$ is differentiable at $x=0$.
|
A subgradient does not exist as $f(x)$ is differentiable at $x=0$., A subgradient exists but is not unique., A subgradient exists and is unique., A subgradient does not exist even though $f(x)$ is differentiable at $x=0$.
|
D
|
mcq_train
|
mcq_train_49
|
Question: Consider the logistic regression loss $L: \R^d \to \R$ for a binary classification task with data $\left( \xv_i, y_i \right) \in \R^d \times \{0, 1\}$ for $i \in \left\{ 1, \ldots, N \right\}$:
\begin{equation*}
L(\wv) = \frac{1}{N} \sum_{i = 1}^N \bigg(\log\left(1 + e^{\xv_i^\top\wv}\right) - y_i\xv_i^\top\wv \bigg).
\end{equation*}
Which of the following is a gradient of the loss $L$?
Options: $\nabla L(\wv) = \frac{1}{N} \sum_{i = 1}^N \; \xv_i \bigg( y_i - \frac{e^{\xv_i^\top\wv}}{1 + e^{\xv_i^\top\wv}}\bigg)$, $\nabla L(\wv) = \frac{1}{N} \sum_{i = 1}^N \; \xv_i \bigg( \frac{1}{1 + e^{-\xv_i^\top\wv}} - y_i\bigg)$, $\nabla L(\wv) = \frac{1}{N} \sum_{i = 1}^N \; \bigg( \frac{e^{\xv_i^\top\wv}}{1 + e^{\xv_i^\top\wv}} - y_i\xv_i \bigg)$, $\nabla L(\wv) = \frac{1}{N} \sum_{i = 1}^N \; \bigg( \xv_i \frac{e^{\xv_i^\top\wv}}{1 + e^{\xv_i^\top\wv}} - y_i\xv_i^\top\wv\bigg)$
|
$\nabla L(\wv) = \frac{1}{N} \sum_{i = 1}^N \; \xv_i \bigg( y_i - \frac{e^{\xv_i^\top\wv}}{1 + e^{\xv_i^\top\wv}}\bigg)$, $\nabla L(\wv) = \frac{1}{N} \sum_{i = 1}^N \; \xv_i \bigg( \frac{1}{1 + e^{-\xv_i^\top\wv}} - y_i\bigg)$, $\nabla L(\wv) = \frac{1}{N} \sum_{i = 1}^N \; \bigg( \frac{e^{\xv_i^\top\wv}}{1 + e^{\xv_i^\top\wv}} - y_i\xv_i \bigg)$, $\nabla L(\wv) = \frac{1}{N} \sum_{i = 1}^N \; \bigg( \xv_i \frac{e^{\xv_i^\top\wv}}{1 + e^{\xv_i^\top\wv}} - y_i\xv_i^\top\wv\bigg)$
|
B
|
mcq_train
|
mcq_train_50
|
Question: If the first column of matrix L is (0,1,1,1) and all other entries are 0 then the authority values
Options: (0, 1, 1, 1), (0, 1/sqrt(3), 1/sqrt(3), 1/sqrt(3)), (1, 1/sqrt(3), 1/sqrt(3), 1/sqrt(3)), (1, 0, 0, 0)
|
(0, 1, 1, 1), (0, 1/sqrt(3), 1/sqrt(3), 1/sqrt(3)), (1, 1/sqrt(3), 1/sqrt(3), 1/sqrt(3)), (1, 0, 0, 0)
|
B
|
mcq_train
|
mcq_train_51
|
Question: If the top 100 documents contain 50 relevant documents
Options: the precision of the system at 50 is 0.25, the precision of the system at 100 is 0.5, the recall of the system is 0.5, All of the above
|
the precision of the system at 50 is 0.25, the precision of the system at 100 is 0.5, the recall of the system is 0.5, All of the above
|
B
|
mcq_train
|
mcq_train_52
|
Question: Which of the following statements about index merging (when constructing inverted files) is correct?
Options: While merging two partial indices on disk, the inverted lists of a term are concatenated without sorting, Index merging is used when the vocabulary no longer fits into the main memory, The size of the final merged index file is O(n log2(n) M), where M is the size of the available memory, While merging two partial indices on disk, the vocabularies are concatenated without sorting
|
While merging two partial indices on disk, the inverted lists of a term are concatenated without sorting, Index merging is used when the vocabulary no longer fits into the main memory, The size of the final merged index file is O(n log2(n) M), where M is the size of the available memory, While merging two partial indices on disk, the vocabularies are concatenated without sorting
|
A
|
mcq_train
|
mcq_train_53
|
Question: Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is false?
Options: The dimensions of LSI can be interpreted as concepts, whereas those of WE cannot, LSI does not depend on the order of words in the document, whereas WE does, LSI is deterministic (given the dimension), whereas WE is not, LSI does take into account the frequency of words in the documents, whereas WE with negative sampling does not
|
The dimensions of LSI can be interpreted as concepts, whereas those of WE cannot, LSI does not depend on the order of words in the document, whereas WE does, LSI is deterministic (given the dimension), whereas WE is not, LSI does take into account the frequency of words in the documents, whereas WE with negative sampling does not
|
D
|
mcq_train
|
mcq_train_54
|
Question: The number of non-zero entries in a column of a term-document matrix indicates:
Options: how many terms of the vocabulary a document contains, how often a term of the vocabulary occurs in a document, how relevant a term is for a document, none of the other responses is correct
|
how many terms of the vocabulary a document contains, how often a term of the vocabulary occurs in a document, how relevant a term is for a document, none of the other responses is correct
|
D
|
mcq_train
|
mcq_train_55
|
Question: Which of the following statements on Latent Semantic Indexing (LSI) and Word Embeddings (WE) is incorrect
Options: LSI is deterministic (given the dimension), whereas WE is not, LSI does not take into account the order of words in the document, whereas WE does, The dimensions of LSI can be interpreted as concepts, whereas those of WE cannot, LSI does take into account the frequency of words in the documents, whereas WE does not.
|
LSI is deterministic (given the dimension), whereas WE is not, LSI does not take into account the order of words in the document, whereas WE does, The dimensions of LSI can be interpreted as concepts, whereas those of WE cannot, LSI does take into account the frequency of words in the documents, whereas WE does not.
|
D
|
mcq_train
|
mcq_train_56
|
Question: Modularity of a social network always:
Options: Increases with the number of communities, Increases when an edge is added between two members of the same community, Decreases when new nodes are added to the social network that form their own communities, Decreases if an edge is removed
|
Increases with the number of communities, Increases when an edge is added between two members of the same community, Decreases when new nodes are added to the social network that form their own communities, Decreases if an edge is removed
|
B
|
mcq_train
|
mcq_train_57
|
Question: Which of the following is wrong regarding Ontologies?
Options: We can create more than one ontology that conceptualizes the same real-world entities, Ontologies help in the integration of data expressed in different models, Ontologies dictate how semi-structured data are serialized, Ontologies support domain-specific vocabularies
|
We can create more than one ontology that conceptualizes the same real-world entities, Ontologies help in the integration of data expressed in different models, Ontologies dictate how semi-structured data are serialized, Ontologies support domain-specific vocabularies
|
C
|
mcq_train
|
mcq_train_58
|
Question: Which of the following statements is correct concerning the use of Pearson’s Correlation for user-based collaborative filtering?
Options: It measures whether different users have similar preferences for the same items, It measures how much a user’s ratings deviate from the average ratings, It measures how well the recommendations match the user’s preferences, It measures whether a user has similar preferences for different items
|
It measures whether different users have similar preferences for the same items, It measures how much a user’s ratings deviate from the average ratings, It measures how well the recommendations match the user’s preferences, It measures whether a user has similar preferences for different items
|
A
|
mcq_train
|
mcq_train_59
|
Question: After the join step, the number of k+1-itemsets
Options: is equal to the number of frequent k-itemsets, can be equal, lower or higher than the number of frequent k-itemsets, is always higher than the number of frequent k-itemsets, is always lower than the number of frequent k-itemsets
|
is equal to the number of frequent k-itemsets, can be equal, lower or higher than the number of frequent k-itemsets, is always higher than the number of frequent k-itemsets, is always lower than the number of frequent k-itemsets
|
B
|
mcq_train
|
mcq_train_60
|
Question: Will modularity clustering always end up with the same community structure?
Options: True, Only for connected graphs, Only for cliques, False
|
True, Only for connected graphs, Only for cliques, False
|
D
|
mcq_train
|
mcq_train_61
|
Question: For his awesome research, Tugrulcan is going to use Pagerank with teleportation and the HITS algorithm, not on a network of webpages but on the retweet network of Twitter! The retweet network is a directed graph, where nodes are users and an edge going out from a user A to a user B means that "User A retweeted User B". Which one is FALSE about a Twitter bot that frequently retweeted other users but never got retweeted by other users or by itself?
Options: It will have a non-zero hub value., It will have an authority value of zero., It will have a pagerank of zero., Its authority value will be equal to the hub value of a user who never retweets other users.
|
It will have a non-zero hub value., It will have an authority value of zero., It will have a pagerank of zero., Its authority value will be equal to the hub value of a user who never retweets other users.
|
C
|
mcq_train
|
mcq_train_62
|
Question: When searching for an entity $e_{new}$ that has a given relationship $r$ with a given entity $e$
Options: We search for $e_{new}$ that have a similar embedding vector to $e$, We search for $e_{new}$ that have a similar embedding vector to $e_{old}$ which has relationship $r$ with $e$, We search for pairs $(e_{new}, e)$ that have similar embedding to $(e_{old}, e)$, We search for pairs $(e_{new}, e)$ that have similar embedding to $(e_{old}, e)$ for $e_{old}$ which has relationship $r$ with $e$
|
We search for $e_{new}$ that have a similar embedding vector to $e$, We search for $e_{new}$ that have a similar embedding vector to $e_{old}$ which has relationship $r$ with $e$, We search for pairs $(e_{new}, e)$ that have similar embedding to $(e_{old}, e)$, We search for pairs $(e_{new}, e)$ that have similar embedding to $(e_{old}, e)$ for $e_{old}$ which has relationship $r$ with $e$
|
C
|
mcq_train
|
mcq_train_63
|
Question: Which of the following graph analysis techniques do you believe would be most appropriate to identify communities on a social graph?
Options: Cliques, Random Walks, Shortest Paths, Association rules
|
Cliques, Random Walks, Shortest Paths, Association rules
|
A
|
mcq_train
|
mcq_train_64
|
Question: For which document classifier is the training cost low and inference expensive?
Options: for none, for kNN, for NB, for fasttext
|
for none, for kNN, for NB, for fasttext
|
B
|
mcq_train
|
mcq_train_65
|
Question: In Ranked Retrieval, the result at position k is non-relevant and at k+1 is relevant. Which of the following is always true?
Hint: P@k and R@k are the precision and recall of the result set consisting of the k top-ranked documents.
Options: P@k-1>P@k+1, R@k-1=R@k+1, R@k-1<R@k+1, P@k-1=P@k+1
|
P@k-1>P@k+1, R@k-1=R@k+1, R@k-1<R@k+1, P@k-1=P@k+1
|
C
|
mcq_train
|
mcq_train_66
|
Question: For an item that has not received any ratings, which method can make a prediction?
Options: User-based collaborative RS, Item-based collaborative RS, Content-based RS, None of the above
|
User-based collaborative RS, Item-based collaborative RS, Content-based RS, None of the above
|
C
|
mcq_train
|
mcq_train_67
|
Question: The SMART algorithm for query relevance feedback modifies? (Slide 11 Week 3)
Options: The original document weight vectors, The original query weight vectors, The result document weight vectors, The keywords of the original user query
|
The original document weight vectors, The original query weight vectors, The result document weight vectors, The keywords of the original user query
|
B
|
mcq_train
|
mcq_train_68
|
Question: Suppose that in a given FP Tree, an item in a leaf node N exists in every path. Which of the following is TRUE?
Options: N co-occurs with its prefixes in every transaction, For every node P that is a parent of N in the FP tree, confidence (P->N) = 1, {N}’s minimum possible support is equal to the number of paths, The item N exists in every candidate set
|
N co-occurs with its prefixes in every transaction, For every node P that is a parent of N in the FP tree, confidence (P->N) = 1, {N}’s minimum possible support is equal to the number of paths, The item N exists in every candidate set
|
C
|
mcq_train
|
mcq_train_69
|
Question: Let $f_{\mathrm{MLP}}: \mathbb{R}^{d} \rightarrow \mathbb{R}$ be an $L$-hidden layer multi-layer perceptron (MLP) such that $$ f_{\mathrm{MLP}}(\mathbf{x})=\mathbf{w}^{\top} \sigma\left(\mathbf{W}_{L} \sigma\left(\mathbf{W}_{L-1} \ldots \sigma\left(\mathbf{W}_{1} \mathbf{x}\right)\right)\right) $$ with $\mathbf{w} \in \mathbb{R}^{M}, \mathbf{W}_{1} \in \mathbb{R}^{M \times d}$ and $\mathbf{W}_{\ell} \in \mathbb{R}^{M \times M}$ for $\ell=2, \ldots, L$, and $\sigma$ is an entry-wise activation function. Also, let $f_{\mathrm{CNN}}: \mathbb{R}^{d} \rightarrow \mathbb{R}$ be an $L^{\prime}$-hidden layer convolutional neural network (CNN) such that $$ f_{\mathrm{CNN}}(\mathbf{x})=\mathbf{w}^{\top} \sigma\left(\mathbf{w}_{L^{\prime}} \star \sigma\left(\mathbf{w}_{L^{\prime}-1} \star \ldots \sigma\left(\mathbf{w}_{1} \star \mathbf{x}\right)\right)\right) $$ with $\mathbf{w} \in \mathbb{R}^{d}, \mathbf{w}_{\ell} \in \mathbb{R}^{K}$ for $\ell=1, \ldots, L^{\prime}$ and $\star$ denoting the one-dimensional convolution operator with zero-padding, i.e., output of the convolution has the same dimensionality as the input. Let's assume $\sigma$ is a tanh activation function. Thus, by flipping the signs of all of the weights leading in and out of a hidden neuron, the input-output mapping function represented by the network is unchanged. Besides, interchanging the values of all of the weights (i.e., by permuting the ordering of the hidden neurons within the layer) also leaves the network input-output mapping function unchanged. Suppose that, given the training data, SGD can find a solution with zero training loss, and the (absolute value) weights of such solution are non-zero and all unique. Choose the largest lower bound on the number of solutions (with zero training loss) achievable by $f_{\mathrm{MLP}}$ with $L=1$ and $M$ hidden units on this dataset.
Options: $M! 2^M$, $1$, $2^M$, $M !$
|
$M! 2^M$, $1$, $2^M$, $M !$
|
A
|
mcq_train
|
mcq_train_70
|
Question: Consider a linear regression problem with $N$ samples $\left\{\left(\boldsymbol{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}$, where each input $\boldsymbol{x}_{n}$ is a $D$-dimensional vector in $\{-1,+1\}^{D}$, and all output values are $y_{n} \in \mathbb{R}$. Which of the following statements is correct?
Options: Linear regression always "works" very well for $N \ll D$, A linear regressor works very well if the data is linearly separable., Linear regression always "works" very well for $D \ll N$, None of the above.
|
Linear regression always "works" very well for $N \ll D$, A linear regressor works very well if the data is linearly separable., Linear regression always "works" very well for $D \ll N$, None of the above.
|
D
|
mcq_train
|
mcq_train_71
|
Question: Recall that the hard-margin SVM problem corresponds to:
$$ \underset{\substack{\ww \in \R^d, \\ \forall i:\ y_i \ww^\top \xx_i \geq 1}}{\min} \Vert \ww \Vert_2.$$
Now consider the $2$-dimensional classification dataset corresponding to the $3$ following datapoints: $\xx_1 = (-1, 2)$, $\xx_2 = (1, 2)$, $\xx_3 = (0, -2)$ and $y_1 = y_2 = 1$, $y_3 = -1$.
Which of the following statements is \textbf{true}:
Options: Our dataset is not linearly separable and hence it does not make sense to consider the hard-margin problem., There exists a unique $\ww^\star$ which linearly separates our dataset., The unique vector which solves the hard-margin problem for our dataset is $\ww^\star = (0, 1)$., None of the other statements are true.
|
Our dataset is not linearly separable and hence it does not make sense to consider the hard-margin problem., There exists a unique $\ww^\star$ which linearly separates our dataset., The unique vector which solves the hard-margin problem for our dataset is $\ww^\star = (0, 1)$., None of the other statements are true.
|
D
|
mcq_train
|
mcq_train_72
|
Question: Let $\mathcal{R}_{p}(f, \varepsilon)$ be the $\ell_{p}$ adversarial risk of a classifier $f: \mathbb{R}^{d} \rightarrow\{ \pm 1\}$, i.e., $$ \mathcal{R}_{p}(f, \varepsilon)=\mathbb{E}_{(\mathbf{x}, y) \sim \mathcal{D}}\left[\max _{\tilde{\mathbf{x}}:\|\mathbf{x}-\tilde{\mathbf{x}}\|_{p} \leq \varepsilon} \mathbb{1}_{\{f(\tilde{\mathbf{x}}) \neq y\}}\right], $$ for $p=1,2, \infty$. Which of the following relationships between the adversarial risks is true?
Options: $\mathcal{R}_{2}(f, \varepsilon) \leq \mathcal{R}_{1}(f, 2 \varepsilon)$, $\mathcal{R}_{\infty}(f, \varepsilon) \leq \mathcal{R}_{2}(f, \sqrt{d} \varepsilon)$, $\mathcal{R}_{\infty}(f, \varepsilon) \leq \mathcal{R}_{1}(f, \varepsilon)$, $\mathcal{R}_{\infty}(f, \varepsilon) \leq \mathcal{R}_{2}(f, \varepsilon / d)$
|
$\mathcal{R}_{2}(f, \varepsilon) \leq \mathcal{R}_{1}(f, 2 \varepsilon)$, $\mathcal{R}_{\infty}(f, \varepsilon) \leq \mathcal{R}_{2}(f, \sqrt{d} \varepsilon)$, $\mathcal{R}_{\infty}(f, \varepsilon) \leq \mathcal{R}_{1}(f, \varepsilon)$, $\mathcal{R}_{\infty}(f, \varepsilon) \leq \mathcal{R}_{2}(f, \varepsilon / d)$
|
B
|
mcq_train
|
mcq_train_73
|
Question: Consider a movie recommendation system which minimizes the following objective:
\[
\frac{1}{2} \sum_{(d,n)\in\Omega} [x_{dn} - (\mathbf{W} \mathbf{Z}^\top)_{dn}]^2 + \frac{\lambda_w}{2} \|\mathbf{W}\|_\text{Frob}^2 + \frac{\lambda_z}{2} \|\mathbf{Z}\|_\text{Frob}^2
\]
where $\mathbf{W}\in \R^{D \times K}$ and $\mathbf{Z}\in \R^{N \times K}$.
Suppose movies are divided into genre A and genre B (i.e., $\mathbf{W}_A\in \R^{D_A \times K}$, $\mathbf{W}_B\in \R^{D_B \times K}$, $\mathbf{W}=[\mathbf{W}_A; \mathbf{W}_B]$, with $D_A\!+\!D_B=D$) and users are divided into group 1 and group 2 (i.e., $\mathbf{Z}_1\in \R^{N_1 \times K}$, $\mathbf{Z}_2\in \R^{N_2 \times K}$, $\mathbf{Z}=[\mathbf{Z}_1; \mathbf{Z}_2]$, with $N_1\!+\!N_2=N$). In addition, group 1 users only rate genre A movies while group 2 users only rate genre B movies. Then instead of training a large recommendation system with $(\mathbf{W}, \mathbf{Z})$, one may train two smaller recommendation systems with parameters $(\mathbf{W}_A, \mathbf{Z}_1)$ and $(\mathbf{W}_B, \mathbf{Z}_2)$ separately. If SGD is used to solve the minimization problems and all conditions remain the same (e.g., hyperparameters, sampling order, initialization, etc.), then which of the following statements is true about the two training methods?
Options: Feature vectors obtained in both cases remain the same. , Feature vectors obtained in both cases are different., Feature vectors obtained in both cases can be either same or different, depending on the sparsity of rating matrix., Feature vectors obtained in both cases can be either same or different, depending on if ratings in two groups and genres are evenly distributed.
|
Feature vectors obtained in both cases remain the same. , Feature vectors obtained in both cases are different., Feature vectors obtained in both cases can be either same or different, depending on the sparsity of rating matrix., Feature vectors obtained in both cases can be either same or different, depending on if ratings in two groups and genres are evenly distributed.
|
A
|
mcq_train
|
mcq_train_74
|
Question: Consider a Generative Adversarial Network (GAN) which successfully produces images of goats. Which of the following statements is false?
Options: The discriminator can be used to classify images as goat vs non-goat., The generator aims to learn the distribution of goat images., After the training, the discriminator loss should ideally reach a constant value., The generator can produce unseen images of goats.
|
The discriminator can be used to classify images as goat vs non-goat., The generator aims to learn the distribution of goat images., After the training, the discriminator loss should ideally reach a constant value., The generator can produce unseen images of goats.
|
A
|
mcq_train
|
mcq_train_75
|
Question: How does the bias-variance decomposition of a ridge regression estimator compare with that of the ordinary least-squares estimator in general?
Options: Ridge has a larger bias, and larger variance., Ridge has a larger bias, and smaller variance., Ridge has a smaller bias, and larger variance., Ridge has a smaller bias, and smaller variance.
|
Ridge has a larger bias, and larger variance., Ridge has a larger bias, and smaller variance., Ridge has a smaller bias, and larger variance., Ridge has a smaller bias, and smaller variance.
|
B
|
mcq_train
|
mcq_train_76
|
Question: Consider a linear regression model on a dataset which we split into a training set and a test set. After training, our model gives a mean-squared error of 0.1 on the training set and a mean-squared error of 5.3 on the test set. Recall that the mean-squared error (MSE) is given by:
$$MSE_{\textbf{w}}(\textbf{y}, \textbf{X}) = \frac{1}{2N} \sum_{n=1}^N (y_n - \textbf{x}_n^\top \textbf{w})^2$$
Which of the following statements is \textbf{correct}?
Options: Retraining the model with feature augmentation (e.g. adding polynomial features) will increase the training MSE., Using cross-validation can help decrease the training MSE of this very model., Retraining while discarding some training samples will likely reduce the gap between the train MSE and the test MSE., Ridge regression can help reduce the gap between the training MSE and the test MSE.
|
Retraining the model with feature augmentation (e.g. adding polynomial features) will increase the training MSE., Using cross-validation can help decrease the training MSE of this very model., Retraining while discarding some training samples will likely reduce the gap between the train MSE and the test MSE., Ridge regression can help reduce the gap between the training MSE and the test MSE.
|
D
|
mcq_train
|
mcq_train_77
|
Question: You are given two distributions over $\mathbb{R}$ : Uniform on the interval $[a, b]$ and Gaussian with mean $\mu$ and variance $\sigma^{2}$. Their respective probability density functions are $$ p_{\mathcal{U}}(y \mid a, b):=\left\{\begin{array}{ll} \frac{1}{b-a}, & \text { for } a \leq y \leq b, \\ 0 & \text { otherwise } \end{array} \quad p_{\mathcal{G}}\left(y \mid \mu, \sigma^{2}\right):=\frac{1}{\sqrt{2 \pi \sigma^{2}}} \exp \left(-\frac{(y-\mu)^{2}}{2 \sigma^{2}}\right)\right. $$ Which one(s) belong to the exponential family?
Options: Only Uniform., Both of them., Only Gaussian., None of them.
|
Only Uniform., Both of them., Only Gaussian., None of them.
|
C
|
mcq_train
|
mcq_train_78
|
Question: Church booleans are a representation of booleans in the lambda calculus. The Church encodings of true and false are functions of two parameters:
Church encoding of tru: t => f => t
Church encoding of fls: t => f => f
What should replace ??? so that the following function computes not(b and c)?
b => c => b ??? (not b)
Options: (not b), (not c), tru, fls
|
(not b), (not c), tru, fls
|
B
|
mcq_train
|
mcq_train_79
|
Question: To which expression is the following for-loop translated?
for x <- xs if x > 5; y <- ys yield x + y
Options: xs.flatMap(x => ys.map(y => x + y)).withFilter(x => x > 5), xs.withFilter(x => x > 5).map(x => ys.flatMap(y => x + y)), xs.withFilter(x => x > 5).flatMap(x => ys.map(y => x + y)), xs.map(x => ys.flatMap(y => x + y)).withFilter(x => x > 5)
|
xs.flatMap(x => ys.map(y => x + y)).withFilter(x => x > 5), xs.withFilter(x => x > 5).map(x => ys.flatMap(y => x + y)), xs.withFilter(x => x > 5).flatMap(x => ys.map(y => x + y)), xs.map(x => ys.flatMap(y => x + y)).withFilter(x => x > 5)
|
C
|
mcq_train
|
mcq_train_80
|
Question: A multiset is an unordered collection where elements can appear multiple times. We will represent a multiset of Char elements as a function from Char to Int: the function returns 0 for any Char argument that is not in the multiset, and the (positive) number of times it appears otherwise:
type Multiset = Char => Int
The filter operation on a multiset m returns the subset of m for which p holds. What should replace ??? so that the filter function is correct?
def filter(m: Multiset, p: Char => Boolean): Multiset = ???
Options: x => if m(x) then p(x) else 0, x => m(x) && p(x), x => if !m(x) then p(x) else 0, x => if p(x) then m(x) else 0
|
x => if m(x) then p(x) else 0, x => m(x) && p(x), x => if !m(x) then p(x) else 0, x => if p(x) then m(x) else 0
|
D
|
mcq_train
|
mcq_train_81
|
Question: Fermat's little theorem states that for a prime $n$ and any $b\in \mathbb{Z}_n ^\star$ we have\dots
Options: $b^{n-1}\mod n = 1$., $b^{n-1}\mod n = n$., $b^{n}\mod n = 1$., $b^{n-1}\mod n = b$.
|
$b^{n-1}\mod n = 1$., $b^{n-1}\mod n = n$., $b^{n}\mod n = 1$., $b^{n-1}\mod n = b$.
|
A
|
mcq_train
|
mcq_train_82
|
Question: The number of permutations on a set of $n$ elements
Options: is always greater than $2^n$, is approximately $n(\log n - 1)$, can be approximated using the Stirling formula, is independent of the size of the set
|
is always greater than $2^n$, is approximately $n(\log n - 1)$, can be approximated using the Stirling formula, is independent of the size of the set
|
C
|
mcq_train
|
mcq_train_83
|
Question: Select the \emph{incorrect} statement. Complexity analysis of an attack considers
Options: time complexity., memory complexity., probability of success., difficulty to understand a corresponding journal paper.
|
time complexity., memory complexity., probability of success., difficulty to understand a corresponding journal paper.
|
D
|
mcq_train
|
mcq_train_84
|
Question: Which one of these is \emph{not} a stream cipher?
Options: IDEA, RC4, A5/1, E0
|
IDEA, RC4, A5/1, E0
|
A
|
mcq_train
|
mcq_train_85
|
Question: Tick the \emph{correct} assertion regarding GSM.
Options: In GSM, the communication is always encrypted., The integrity of GSM messages is well protected., GSM uses the GSME cipher to encrypt messages., In GSM, the phone is authenticated to the network.
|
In GSM, the communication is always encrypted., The integrity of GSM messages is well protected., GSM uses the GSME cipher to encrypt messages., In GSM, the phone is authenticated to the network.
|
D
|
mcq_train
|
mcq_train_86
|
Question: Tick the \emph{wrong} assertion concerning 3G.
Options: In 3G, the network is authenticated to the phone., The integrity of 3G messages is well protected., In 3G, there is a counter to protect against replay attacks., 3G uses f8 for encryption.
|
In 3G, the network is authenticated to the phone., The integrity of 3G messages is well protected., In 3G, there is a counter to protect against replay attacks., 3G uses f8 for encryption.
|
A
|
mcq_train
|
mcq_train_87
|
Question: Tick the \textbf{false} statement.
Options: Cryptographic primitives used in Bluetooth are provably secure., In WEP, authentication is done with the pre-shared keys., The security of Bluetooth 2.0 pairing is based on PIN., Due to memory limitations, dummy devices can share the same key with everyone.
|
Cryptographic primitives used in Bluetooth are provably secure., In WEP, authentication is done with the pre-shared keys., The security of Bluetooth 2.0 pairing is based on PIN., Due to memory limitations, dummy devices can share the same key with everyone.
|
A
|
mcq_train
|
mcq_train_88
|
Question: Why do block ciphers use modes of operation?
Options: it is necessary for the decryption to work., to be provably secure., to use keys of any size., to encrypt messages of any size.
|
it is necessary for the decryption to work., to be provably secure., to use keys of any size., to encrypt messages of any size.
|
D
|
mcq_train
|
mcq_train_89
|
Question: If we pick independent random numbers in $\{1, 2, \dots, N\}$ with uniform distribution, $\theta \sqrt{N}$ times, we get at least one number twice with probability\dots
Options: $e^{\theta ^2}$, $1-e^{\theta ^2}$, $e^{-\theta ^2 /2}$, $1-e^{-\theta ^2 /2}$
|
$e^{\theta ^2}$, $1-e^{\theta ^2}$, $e^{-\theta ^2 /2}$, $1-e^{-\theta ^2 /2}$
|
D
|
mcq_train
|
mcq_train_90
|
Question: In practice, what is the typical size of an RSA modulus?
Options: 64 bits, 256 bits, 1024 bits, 8192 bits
|
64 bits, 256 bits, 1024 bits, 8192 bits
|
C
|
mcq_train
|
mcq_train_91
|
Question: The one-time pad is\dots
Options: A perfectly binding commitment scheme., A statistically (but not perfectly) binding commitment scheme., A computationally (but not statistically) binding commitment scheme., Not a commitment scheme.
|
A perfectly binding commitment scheme., A statistically (but not perfectly) binding commitment scheme., A computationally (but not statistically) binding commitment scheme., Not a commitment scheme.
|
D
|
mcq_train
|
mcq_train_92
|
Question: Tick the \textbf{false} statement.
Options: The identity element of $E_{a,b}$ is the point at infinity., If a point is singular on an Elliptic curve, we can draw a tangent to this point., $P=(x_p,y_p)$ and $Q=(x_p,-y_p)$ are the inverse of each other on an Elliptic curve of equation $y^2=x^3+ax+b$., Elliptic curve cryptography is useful in public-key cryptography.
|
The identity element of $E_{a,b}$ is the point at infinity., If a point is singular on an Elliptic curve, we can draw a tangent to this point., $P=(x_p,y_p)$ and $Q=(x_p,-y_p)$ are the inverse of each other on an Elliptic curve of equation $y^2=x^3+ax+b$., Elliptic curve cryptography is useful in public-key cryptography.
|
B
|
mcq_train
|
mcq_train_93
|
Question: Diffie-Hellman refers to \ldots
Options: a signature scheme., a public-key cryptosystem., a key-agreement protocol., the inventors of the RSA cryptosystem.
|
a signature scheme., a public-key cryptosystem., a key-agreement protocol., the inventors of the RSA cryptosystem.
|
C
|
mcq_train
|
mcq_train_94
|
Question: Consider the Rabin cryptosystem using a modulus $N=pq$ where $p$ and $q$ are both $\ell$-bit primes. What is the tightest complexity of the encryption algorithm?
Options: $O(\ell)$, $O(\ell^2)$, $O(\ell^3)$, $O(\ell^4)$
|
$O(\ell)$, $O(\ell^2)$, $O(\ell^3)$, $O(\ell^4)$
|
B
|
mcq_train
|
mcq_train_95
|
Question: Select the \emph{incorrect} statement.
Options: The non-deterministic encryption can encrypt one plaintext into many ciphertexts., The non-deterministic encryption always provides perfect secrecy., Plain RSA encryption is deterministic., ElGamal encryption is non-deterministic.
|
The non-deterministic encryption can encrypt one plaintext into many ciphertexts., The non-deterministic encryption always provides perfect secrecy., Plain RSA encryption is deterministic., ElGamal encryption is non-deterministic.
|
B
|
mcq_train
|
mcq_train_96
|
Question: Which mode of operation is similar to a stream cipher?
Options: ECB, OFB, CFB, CBC
|
ECB, OFB, CFB, CBC
|
B
|
mcq_train
|
mcq_train_97
|
Question: Select the \emph{incorrect} statement.
Options: The Discrete Logarithm can be solved in polynomial time on a quantum computer., The ElGamal cryptosystem is based on the Discrete Logarithm problem., The Computational Diffie-Hellman problem reduces to the Discrete Logarithm problem., The Discrete Logarithm is hard to compute for the additive group $\mathbf{Z}_{n}$.
|
The Discrete Logarithm can be solved in polynomial time on a quantum computer., The ElGamal cryptosystem is based on the Discrete Logarithm problem., The Computational Diffie-Hellman problem reduces to the Discrete Logarithm problem., The Discrete Logarithm is hard to compute for the additive group $\mathbf{Z}_{n}$.
|
D
|
mcq_train
|
mcq_train_98
|
Question: In Bluetooth, the link key $K_{link}$ is ...
Options: used to generate an ephemeral key $K_{init}$., not used to generate the encryption key., used to authenticate devices., the input to the pairing protocol.
|
used to generate an ephemeral key $K_{init}$., not used to generate the encryption key., used to authenticate devices., the input to the pairing protocol.
|
C
|
mcq_train
|
mcq_train_99
|
Question: Let $n=pq$ where $p$ and $q$ are prime numbers. We have:
Options: $\varphi (n) = n-1$, $\varphi (n) = pq$, $\varphi (n) = p + q$, $\varphi (n) = (p-1) (q-1)$
|
$\varphi (n) = n-1$, $\varphi (n) = pq$, $\varphi (n) = p + q$, $\varphi (n) = (p-1) (q-1)$
|
D
|
mcq_train
|
mcq_train_100
|
Question: Which of the following elements belongs to $\mathbb{Z}_{78}^*$?
Options: 46, 35, 21, 65
|
46, 35, 21, 65
|
B
|
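As a minimal usage sketch: assuming this preview corresponds to a standard Hugging Face dataset with the columns shown in the table header above, it can be loaded with the `datasets` library. The repository id and split name below are hypothetical placeholders, since neither is shown in this preview.

```python
# Minimal sketch: loading and inspecting this MCQ dataset with the
# Hugging Face `datasets` library. The repo id and split name are
# hypothetical placeholders -- substitute the dataset's actual values.
from datasets import load_dataset

ds = load_dataset("your-username/mcq-dataset", split="train")  # hypothetical id/split

# Each row carries the columns from the preview table:
# dataset, id, question, choices, answer.
example = ds[0]
print(example["id"])        # e.g. "mcq_train_1"
print(example["question"])  # question text, with the options appended after "Options:"
print(example["choices"])   # the same option list as a separate comma-separated field
print(example["answer"])    # gold label: one of "A", "B", "C", "D"
```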