Tuesday, November 28, 2017

Logistic Regression, Linear Regression, SVM, Decision Tree and Random Forest

Linear Regression:
In linear regression, the outcome (dependent variable) is continuous: it can take any of an infinite number of possible values. In logistic regression, the outcome (dependent variable) can take only a limited number of possible values, as sketched below.
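A minimal, hypothetical sketch of the contrast (assuming scikit-learn and NumPy; the toy data below is made up purely for illustration):

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])   # one feature
y_cont = np.array([1.5, 3.1, 4.4, 6.2])      # continuous outcome
y_class = np.array([0, 0, 1, 1])             # categorical outcome (two classes)

reg = LinearRegression().fit(X, y_cont)      # regression: fits a line
clf = LogisticRegression().fit(X, y_class)   # classification: fits class probabilities

print(reg.predict([[2.5]]))   # any real value is possible
print(clf.predict([[2.5]]))   # only one of the limited labels: 0 or 1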
Logistic Regression:
1. Logistic Regression is a model for classification, not regression.
2. Performs very well on linearly separable classes.
3. One of the most widely used classification algorithms in industry.
4. The idea behind logistic regression: the logit function logit(p) = log[p/(1-p)] maps a probability p to the real line; inverting the logit gives the logistic (sigmoid) function F(z) = 1/(1+e^(-z)), which lets us predict the probability that a certain sample belongs to a particular class (see the sketch below).
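A minimal sketch of point 4 in plain NumPy (the helper name sigmoid is my own choice, not from the original):

import numpy as np

def sigmoid(z):
    # Inverse of the logit: maps any real net input z (e.g. z = w^T x)
    # to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))   # 0.5   -- exactly on the decision boundary
print(sigmoid(2.0))   # ~0.88 -- likely the positive class
print(sigmoid(-2.0))  # ~0.12 -- likely the negative class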
Support Vector Machine:
1. An extension of the perceptron.
2. The optimization objective is to maximize the margin: the distance between the separating hyperplane (decision boundary) and the training samples that are closest to this hyperplane.
3. Maximizing the margin 2/|w| is equivalent to minimizing |w|/2 (in practice, the squared term |w|^2/2 is minimized because it is easier to optimize).
4. Another reason SVMs are popular in machine learning is that they can solve nonlinear problems using a kernel SVM.
5. The concept behind kernel methods for dealing with linearly inseparable data is to create nonlinear combinations of the original features and project them onto a higher-dimensional space, via a mapping function, where the classes become linearly separable (see the sketch after this list).
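A minimal sketch of point 5, assuming scikit-learn (the XOR-style toy data and the gamma and C values are illustrative choices, not tuned):

import numpy as np
from sklearn.svm import SVC

# XOR-style data: not linearly separable in the original 2-D feature space.
rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0).astype(int)

# The RBF kernel implicitly maps the samples to a higher-dimensional space
# where a linear separating hyperplane can be found.
svm = SVC(kernel='rbf', gamma=0.5, C=1.0)
svm.fit(X, y)
print(svm.score(X, y))  # training accuracy well above the ~0.5 chance level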
Decision Tree:
1. Breaks down the data by making decisions based on asking a series of questions.
2. Starts at the tree root and splits the data on the feature that results in the largest Information Gain (IG).
3. Decision trees overfit easily, so the model needs to be pruned, for example by setting a limit on the maximal depth of the tree (see the sketch below).
4. Advantage: an attractive model in terms of interpretability.
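A minimal sketch of points 2 and 3, assuming scikit-learn and its bundled Iris dataset (max_depth=3 is an illustrative pre-pruning limit, not a tuned value):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# criterion='entropy' splits on the feature yielding the largest
# information gain; max_depth caps the tree to curb overfitting.
tree = DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=0)
tree.fit(X, y)
print(tree.score(X, y))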
Random Forest:
1. Advantages: good classification performance, scalability, and ease of use. There is less need to worry about choosing good hyperparameter values, and no need to prune the random forest, since the ensemble model is quite robust to noise from the individual decision trees.
2. Can be considered an ensemble of decision trees.
3. The idea behind a random forest is to average multiple decision trees that individually suffer from high variance, in order to build a more robust model that has better generalization performance and is less susceptible to overfitting (see the sketch below).
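A minimal sketch, again assuming scikit-learn and the Iris dataset (n_estimators=100 is an illustrative ensemble size):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# 100 decision trees, each grown on a bootstrap sample with random feature
# subsets at each split; aggregating their predictions averages out the
# high variance of the individual trees. No per-tree pruning is needed.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)
print(forest.score(X, y))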

