51. What are the three stages of building a model in machine learning?
(a). Model building
(b). Model testing
(c). Applying the model
52. What is convergence in machine learning?
Informally, convergence refers to a state reached during training in which the training loss and validation loss change very little or not at all from iteration to iteration after a certain number of iterations.
In other words, a model reaches convergence when additional training on the current data will not improve the model. In deep learning, loss values sometimes stay constant or nearly so for many iterations before finally descending, temporarily producing a false sense of convergence.
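A minimal sketch of this idea, using an invented toy quadratic loss: training stops once the loss barely changes between iterations, which is one simple way to detect convergence.

```python
# Toy example (assumed, not from any library): gradient descent on a
# quadratic loss with minimum at w = 3, stopping when the loss change
# between iterations falls below a tolerance.
def train(lr=0.1, tol=1e-6, max_iters=1000):
    w = 0.0                       # start far from the optimum
    prev_loss = float("inf")
    for i in range(max_iters):
        loss = (w - 3.0) ** 2     # toy loss function
        grad = 2.0 * (w - 3.0)
        w -= lr * grad            # gradient descent step
        if abs(prev_loss - loss) < tol:
            return w, i           # loss barely changed: converged
        prev_loss = loss
    return w, max_iters

w, iters = train()
# w ends up near 3, long before max_iters is exhausted
```

Note that the loss stalling for a while (as the answer above mentions) would fool this simple check, which is why practical early-stopping rules usually require the loss to stay flat for several consecutive iterations ("patience").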
53. Explain the difference between L1 and L2 regularization.
L2 regularization tends to spread error among all the terms, shrinking every weight smoothly toward zero, while L1 encourages sparsity, driving many weights to exactly zero. L1 corresponds to placing a Laplace prior on the weights, while L2 corresponds to a Gaussian prior.
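A small illustrative sketch of this difference (toy weights, not a full training loop): the proximal (soft-threshold) update associated with L1 zeroes out small weights exactly, whereas the shrinkage implied by L2 only scales weights down.

```python
import math

# L1's proximal operator: soft-thresholding sets small weights to zero.
def l1_prox(w, lam):
    return math.copysign(max(abs(w) - lam, 0.0), w)

# L2's effect is multiplicative shrinkage: weights never hit exactly zero.
def l2_shrink(w, lam):
    return w / (1.0 + lam)

weights = [0.05, -0.3, 2.0]   # assumed toy weights
lam = 0.1                     # regularization strength
l1_w = [l1_prox(w, lam) for w in weights]
l2_w = [l2_shrink(w, lam) for w in weights]
# l1_w zeroes the 0.05 weight entirely; l2_w merely shrinks each weight
```

This is why L1 is often used for feature selection: the exact zeros indicate features the model has dropped.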
54. What’s your favorite algorithm, and can you explain it to me in less than a minute?
This type of question tests your understanding of how to communicate complex technical nuances with poise, and your ability to summarize quickly and efficiently. Have an algorithm picked out in advance, and make sure you can explain it so simply and effectively that a five-year-old could grasp the basics!
55. How is ML different from artificial intelligence?
AI is the broader field of building machines that perform tasks which would normally require human intelligence, whereas ML is a subset of AI in which machines learn from data. ML systems gradually improve at their tasks and can automatically build models from what they learn.
56. Differentiate between statistics and ML.
Statistics focuses on establishing and interpreting relationships between variables, often under explicit model assumptions; ML algorithms learn patterns from data with the primary goal of predictive performance. In other words, statistics is concerned with inference from data, whereas ML is concerned with optimization and prediction.
57. What are neural networks and where do they find their application in ML? Elaborate.
Neural networks are information-processing models inspired by the biological neurons of the human brain. They are a popular technique in ML because they can discover patterns in data that are too complex for humans to identify by hand.
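As a minimal sketch (a single sigmoid neuron, i.e. the smallest possible "network", on invented data): gradient descent adjusts the weights until the neuron reproduces the OR truth table.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: the logical OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1 = w2 = b = 0.0
lr = 1.0
for _ in range(2000):
    for (x1, x2), y in data:
        p = sigmoid(w1 * x1 + w2 * x2 + b)
        err = p - y            # cross-entropy gradient w.r.t. pre-activation
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b -= lr * err

preds = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
# preds should match the OR truth table: [0, 1, 1, 1]
```

Real networks stack many such neurons in layers and learn far more complex patterns, but the training principle (compute an error, follow its gradient) is the same.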
58. Differentiate between a parameter and a hyperparameter.
Parameters are values the model learns from the training data, such as the weights of a neural network. Hyperparameters are values that cannot be estimated from the training data and must be set before training begins. Example: the learning rate of a neural network.
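The distinction can be sketched with a toy linear fit (invented data, y = 2x): the learning rate is a hyperparameter we choose up front, while the weight w is a parameter the algorithm estimates from the data.

```python
# Assumed toy dataset following y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

def fit(learning_rate, epochs=200):   # hyperparameters: chosen beforehand
    w = 0.0                           # parameter: learned from the data
    for _ in range(epochs):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= learning_rate * grad
    return w

w = fit(learning_rate=0.05)
# w is estimated from the data and ends up close to the true slope, 2
```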
59. What is ‘tuning’ in ML?
Generally, the goal of ML is to automatically produce accurate output from vast amounts of input data without human intervention. Tuning is the process that makes this possible: it involves optimizing the hyperparameters of an algorithm or ML model so that it performs well.
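One common tuning strategy is a grid search: try each candidate hyperparameter value and keep the one with the lowest validation loss. A hedged sketch on invented data (the split, candidates, and `fit` helper are all assumptions for illustration):

```python
# Assumed toy train/validation split, roughly following y = 2x.
train_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
val_data = [(4.0, 8.1), (5.0, 9.8)]

def fit(lr, epochs=100):
    # simple gradient descent on mean squared error
    w = 0.0
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in train_data) / len(train_data)
        w -= lr * grad
    return w

def val_loss(w):
    return sum((w * x - y) ** 2 for x, y in val_data) / len(val_data)

# Grid search: evaluate each candidate learning rate on held-out data.
candidates = [0.001, 0.01, 0.05]
best_lr = min(candidates, key=lambda lr: val_loss(fit(lr)))
# the tiny 0.001 rate underfits within the epoch budget, so a larger
# candidate wins
```

Libraries such as scikit-learn automate this pattern (e.g. grid or randomized search with cross-validation), but the underlying idea is exactly this loop.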
60. What is optimization in ML?
Optimization, in general, refers to minimizing or maximizing an objective function. In the context of ML, it usually refers to adjusting a model's parameters, and tuning its hyperparameters, so as to minimize the error function (or loss function).