I am working on a binary classification problem, and I want to know the advantages and disadvantages of using Support Vector Machines over decision trees and Adaptive Boosting (AdaBoost).
Advantages of SVM over decision trees and the AdaBoost algorithm
Something you might want to do is use Weka, a nice package that lets you plug in your data and try out a bunch of different machine learning classifiers to see how each performs on your particular set. It's a well-trodden path for people who do machine learning.
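For instance, here is a minimal sketch of that same workflow using scikit-learn instead of Weka (my choice of library, not something from the question); `X` and `y` below are synthetic placeholders for your own feature matrix and binary labels:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier

# Placeholder data standing in for your own binary classification set.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

classifiers = {
    "SVM (RBF kernel)": SVC(kernel="rbf", C=1.0),
    "Decision tree": DecisionTreeClassifier(max_depth=5),
    "AdaBoost": AdaBoostClassifier(n_estimators=100),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Cross-validated accuracy on your own data is a much better basis for choosing among these methods than general pros and cons.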
Knowing nothing about your particular data or the classification problem you are trying to solve, I can't really go beyond telling you general things I know about each method. That said, here's a brain dump.
Adaptive Boosting uses a committee of weak base classifiers to vote on the class assignment of a sample point. The base classifiers can be decision stumps, decision trees, SVMs, etc. It takes an iterative approach: on each iteration, samples that the current committee classifies correctly are down-weighted (less important to get right on the next iteration), and samples it misclassifies are up-weighted (more important to classify correctly on the next iteration). AdaBoost is known for generalizing well and is relatively resistant to overfitting in practice.
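As a rough illustration (not a prescription), here is AdaBoost over decision stumps in scikit-learn, reusing the placeholder `X`, `y` from the sketch above; note that older scikit-learn versions call the `estimator` parameter `base_estimator`:

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

stump = DecisionTreeClassifier(max_depth=1)  # a decision stump: one split

# `estimator` is named `base_estimator` in older scikit-learn releases.
ada = AdaBoostClassifier(estimator=stump, n_estimators=200, learning_rate=0.5)
ada.fit(X, y)

# Each boosting round refits a stump on the reweighted samples; the final
# prediction is a weighted vote of the whole committee.
print(ada.score(X, y))
```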
SVMs are a useful first try. Additionally, you can use different kernels with SVMs and get not just linear decision boundaries but nonlinear ones as well. And if you add slack variables (a soft margin, which amounts to an L1 penalty on margin violations), you not only reduce overfitting but can also classify data that isn't linearly separable.
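A quick sketch of those two knobs on the same placeholder data: the kernel choice, and the soft-margin penalty `C` (small `C` tolerates more margin violations, i.e., stronger regularization; large `C` fits the training set more tightly):

```python
from sklearn.svm import SVC

linear_svm = SVC(kernel="linear", C=0.1)           # linear decision boundary
rbf_svm = SVC(kernel="rbf", C=1.0, gamma="scale")  # nonlinear boundary

for clf in (linear_svm, rbf_svm):
    clf.fit(X, y)
    print(clf.kernel, clf.score(X, y))
```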
Decision trees are useful because just about anyone can interpret them, and they are easy to use. Trees also give you some idea of how important each feature was in building the tree. Something you might want to check out is additive trees, like MART (multiple additive regression trees, a form of gradient boosting).
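As a final sketch on the same placeholder data: a single tree's feature importances, plus a MART-style additive-tree model for comparison (scikit-learn's `GradientBoostingClassifier` stands in for MART here; that substitution is my choice, not something from the question):

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier

tree = DecisionTreeClassifier(max_depth=4).fit(X, y)
print(tree.feature_importances_)  # how much each feature contributed to splits

mart = GradientBoostingClassifier(n_estimators=100).fit(X, y)
print(mart.score(X, y))
```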