1. How are Artificial Intelligence, Machine Learning, and Deep Learning different from each other?
Ans:
Artificial Intelligence is the broad domain focused on creating intelligent systems capable of performing human-like tasks. Machine Learning is a subset of AI where systems learn patterns from data automatically. Deep Learning is an advanced form of ML that uses multi-layer neural networks to solve highly complex problems such as vision, language processing, and speech recognition using large datasets and high computational power.
2. Explain supervised, unsupervised, and reinforcement learning with examples.
Ans:
Supervised learning uses labeled datasets to train models for prediction tasks like house price forecasting or spam detection. Unsupervised learning works on unlabeled data to discover hidden patterns such as customer grouping in marketing analytics. Reinforcement learning trains agents using reward-based feedback, such as robotics navigation or game-playing AI systems improving performance through continuous interaction with the environment.
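As a minimal illustration of the supervised case, the sketch below trains a 1-nearest-neighbour classifier on a few made-up labelled points (the data, labels, and cluster positions are invented for illustration):

```python
# Minimal sketch of supervised learning: a 1-nearest-neighbour classifier
# that predicts the label of the closest labelled training point.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_1nn(train_X, train_y, query):
    # The label of the nearest training point wins.
    distances = [euclidean(x, query) for x in train_X]
    return train_y[distances.index(min(distances))]

# Labelled training data: feature vectors with class labels (illustrative).
train_X = [(1.0, 1.0), (1.2, 0.8), (8.0, 9.0), (9.0, 8.5)]
train_y = ["spam", "spam", "ham", "ham"]

print(predict_1nn(train_X, train_y, (1.1, 0.9)))  # closest to the "spam" cluster
```

Unsupervised and reinforcement learning differ in that no such labels (or only delayed rewards) are available.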
3. What methods help reduce overfitting in machine learning?
Ans:
Overfitting happens when a model performs well on training data but poorly on new unseen data. Techniques like cross-validation, regularization methods such as L1 and L2, dropout layers in neural networks, data augmentation, and pruning of decision trees help improve model generalization ability and make predictions more reliable in production environments.
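One of the techniques above, L2 regularization, can be sketched in plain gradient descent: the penalty term shrinks the weight toward zero, discouraging the model from fitting noise. The data and the penalty strength `lam` here are illustrative choices, not tuned values:

```python
# Sketch of L2 (ridge) regularization for a one-weight linear model y = w * x.

def fit_ridge(xs, ys, lam, lr=0.01, steps=2000):
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of mean squared error plus the L2 penalty term 2 * lam * w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n + 2 * lam * w
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.2]  # roughly y = 2x with noise

w_plain = fit_ridge(xs, ys, lam=0.0)  # no regularization
w_ridge = fit_ridge(xs, ys, lam=5.0)  # strong penalty shrinks the weight
print(w_plain, w_ridge)
```

The regularized weight comes out smaller than the unregularized one, which is exactly the shrinkage effect that limits overfitting.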
4. What is bias-variance tradeoff in ML?
Ans:
Bias-variance tradeoff represents the balance between model simplicity and model flexibility. High bias models oversimplify data leading to underfitting, while high variance models capture noise leading to overfitting. Techniques such as ensemble modeling, cross-validation, and hyperparameter tuning help achieve a balanced model with better prediction accuracy.
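The tradeoff can be made concrete with a toy example on synthetic data: a constant model underfits (high bias, large error everywhere), while a model that memorises the training points overfits (zero training error, non-zero test error). The data, noise level, and models are invented purely for illustration:

```python
# Toy illustration of bias vs variance on noisy samples of y = x**2.
import random

random.seed(0)
train_pts = [(x / 10, (x / 10) ** 2 + random.gauss(0, 0.05)) for x in range(20)]
test_pts = [(x / 10 + 0.05, (x / 10 + 0.05) ** 2) for x in range(20)]

# High-bias model: always predicts the training mean (underfits).
mean_y = sum(y for _, y in train_pts) / len(train_pts)
bias_err = sum((mean_y - y) ** 2 for _, y in test_pts) / len(test_pts)

# High-variance model: predicts the y of the nearest memorised x (overfits).
def nearest(x):
    return min(train_pts, key=lambda p: abs(p[0] - x))[1]

var_train_err = sum((nearest(x) - y) ** 2 for x, y in train_pts) / len(train_pts)
var_test_err = sum((nearest(x) - y) ** 2 for x, y in test_pts) / len(test_pts)

print(bias_err, var_train_err, var_test_err)
```

The memorising model scores perfectly on training data but not on test data, while the constant model is poor on both; a well-tuned model sits between these extremes.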
5. What is a confusion matrix, and which evaluation metrics are derived from it?
Ans:
A confusion matrix measures classification model performance using true positives, true negatives, false positives, and false negatives. Performance metrics like accuracy, precision, recall, and F1-score are derived to evaluate how well a model predicts outcomes and to identify areas where the model can be improved for better classification results.
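The derivation of those metrics from the four counts can be shown directly; the label vectors below are a made-up example:

```python
# Deriving accuracy, precision, recall and F1 from confusion-matrix counts.

def confusion_counts(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # illustrative labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tp, tn, fp, fn = confusion_counts(y_true, y_pred)
accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)   # of predicted positives, how many were right
recall = tp / (tp + fn)      # of actual positives, how many were found
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)
```

Precision and recall expose failure modes (false alarms vs misses) that raw accuracy hides, especially on imbalanced data.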
6. Why are activation functions important in neural networks?
Ans:
Activation functions add non-linearity to neural networks, allowing them to learn complex relationships in data. ReLU is computationally cheap and helps mitigate vanishing gradients, Sigmoid squashes values into (0, 1) and suits probability outputs, and Tanh produces zero-centred outputs in (-1, 1). Without activation functions, a stack of layers would collapse into a single linear model with limited learning capability.
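The three activations mentioned above are one-liners in plain Python:

```python
import math

def relu(x):
    return max(0.0, x)             # cheap; exactly zero for negative inputs

def sigmoid(x):
    return 1 / (1 + math.exp(-x))  # squashes into (0, 1), probability-like

def tanh(x):
    return math.tanh(x)            # squashes into (-1, 1), zero-centred

print(relu(-2.0), sigmoid(0.0), tanh(0.0))  # 0.0 0.5 0.0
```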
7. How do you choose the best algorithm for an ML problem?
Ans:
Algorithm selection depends on dataset size, data type, accuracy requirements, computational cost, and model interpretability. For example, regression models work well for numerical prediction problems, while deep learning models perform better with images, text, and audio data. Ensemble methods are often preferred for high accuracy on structured datasets.
8. What is Gradient Descent, and what are its variants?
Ans:
Gradient Descent is an optimization technique used to minimize loss functions by updating model weights gradually in the direction of reduced error. Variants include Batch Gradient Descent using complete datasets, Stochastic Gradient Descent updating per sample, Mini-batch Gradient Descent using small data batches, and advanced optimizers like Adam that improve convergence speed and stability.
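The mini-batch variant can be sketched on a one-parameter linear model; the batch size, learning rate, and synthetic data (true weight 3) are illustrative choices:

```python
# Mini-batch gradient descent fitting y = w * x on synthetic data.
import random

random.seed(1)
data = [(x, 3.0 * x) for x in range(1, 21)]  # true weight is 3

def minibatch_gd(data, lr=0.001, batch_size=5, epochs=200):
    w = 0.0
    for _ in range(epochs):
        random.shuffle(data)  # shuffle so batches differ each epoch
        for i in range(0, len(data), batch_size):
            batch = data[i:i + batch_size]
            # Gradient of mean squared error on this batch only.
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= lr * grad
    return w

w = minibatch_gd(data)
print(w)  # converges close to the true weight 3.0
```

Batch gradient descent would use all 20 points per update, and stochastic gradient descent a batch size of 1; mini-batch sits between them, trading gradient noise against update cost.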
9. What challenges occur during AI/ML model deployment?
Ans:
Model deployment faces challenges like data drift, latency limitations, scalability requirements, and model explainability issues. Solutions include continuous retraining, container-based deployment on modern cloud infrastructure, and monitoring tools that track model performance in production in real time for reliability and consistency.
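One of those concerns, data drift, can be monitored with a simple statistical check: compare a live feature's mean against the training baseline, measured in baseline standard deviations. The data and any alert threshold here are illustrative, not a standard; production systems typically use more robust tests:

```python
# Hedged sketch of data-drift monitoring for a single numeric feature.

def drift_score(baseline, live):
    mean_b = sum(baseline) / len(baseline)
    mean_l = sum(live) / len(live)
    std_b = (sum((x - mean_b) ** 2 for x in baseline) / len(baseline)) ** 0.5
    # How many baseline standard deviations the live mean has shifted.
    return abs(mean_l - mean_b) / std_b

baseline = [10.0, 11.0, 9.0, 10.5, 9.5]   # feature values seen at training time
live_ok = [10.2, 9.8, 10.1]               # production data, similar distribution
live_drifted = [15.0, 16.0, 14.5]         # production data that has shifted

print(drift_score(baseline, live_ok))       # small: no action needed
print(drift_score(baseline, live_drifted))  # large: consider retraining
```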
10. Describe a real-world AI/ML project and its impact.
Ans:
In a predictive maintenance project, machine data was analyzed to predict failures before they happened. Issues like missing values and class imbalance were solved using data cleaning and SMOTE balancing techniques. Feature selection and model tuning improved prediction accuracy, helping reduce system downtime and improve maintenance planning efficiency by around twenty percent.
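The core idea behind the SMOTE balancing mentioned above is to synthesise new minority-class points by interpolating between a real minority sample and a minority-class neighbour. The sketch below simplifies this (the "neighbour" is just another random minority sample rather than a true k-nearest neighbour), and the data is invented for illustration:

```python
# Simplified sketch of SMOTE-style oversampling of a minority class.
import random

def smote_oversample(minority, n_new, seed=42):
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)   # two distinct minority samples
        t = rng.random()                 # interpolation factor in [0, 1)
        # New point lies on the segment between a and b.
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return synthetic

minority = [(1.0, 2.0), (1.5, 1.8), (0.8, 2.2)]  # illustrative failure cases
new_points = smote_oversample(minority, n_new=4)
print(len(new_points))  # minority class grows from 3 to 7 samples
```

Because each synthetic point is an interpolation of real samples, it stays inside the minority region of feature space rather than duplicating existing rows.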