I am working on a binary classification problem and I want to know the advantages and disadvantages of using a support vector machine over decision trees and adaptive boosting (AdaBoost).
Advantages of SVM over decision trees and the AdaBoost algorithm
8.2k views · Asked by Akshay Kekre
1 answer below
Something you might want to do is use Weka, a nice package that lets you plug in your data and try out a bunch of different machine learning classifiers to see how each performs on your particular set. It's a well-trodden path for people doing machine learning.
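If you'd rather stay in Python than use Weka, here's a minimal sketch of the same "try several classifiers and compare" workflow, assuming scikit-learn is installed (the synthetic dataset here is just a stand-in for your own data):

```python
# Compare an SVM, a decision tree, and AdaBoost via 5-fold cross-validation.
# Assumes scikit-learn is available; swap in your own X, y.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

models = {
    "SVM": SVC(kernel="rbf"),
    "Decision tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "AdaBoost": AdaBoostClassifier(n_estimators=50, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # mean accuracy per model
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```

Which model wins depends entirely on your data, which is exactly why this kind of quick bake-off is worth doing first.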
Knowing nothing about your particular data, or the classification problem you are trying to solve, I can't really go beyond telling you what I know about each method. That said, here's a brain dump.
Adaptive Boosting (AdaBoost) builds a committee of weak base classifiers that vote on the class assignment of each sample point. The base classifiers can be decision stumps, decision trees, SVMs, etc. It takes an iterative approach: on each iteration, samples the current committee classifies correctly are down-weighted (less important to get right on the next iteration), while misclassified samples are up-weighted (more important to classify correctly on the next iteration). AdaBoost is known for generalizing well and is often surprisingly resistant to overfitting.
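To make the reweighting loop concrete, here is a bare-bones sketch of AdaBoost with decision stumps (depth-1 trees), assuming scikit-learn and NumPy are available; the number of rounds and the dataset are arbitrary illustration choices:

```python
# Minimal AdaBoost loop: fit a stump, up-weight its mistakes, repeat.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=1)
y_pm = np.where(y == 1, 1, -1)  # AdaBoost uses labels in {-1, +1}

n = len(y)
w = np.full(n, 1.0 / n)          # start with uniform sample weights
stumps, alphas = [], []
for _ in range(10):
    stump = DecisionTreeClassifier(max_depth=1)
    stump.fit(X, y_pm, sample_weight=w)
    pred = stump.predict(X)
    err = max(np.sum(w[pred != y_pm]), 1e-12)  # weighted error of this stump
    alpha = 0.5 * np.log((1 - err) / err)      # this stump's vote weight
    # Misclassified points get up-weighted, correct ones down-weighted:
    w *= np.exp(-alpha * y_pm * pred)
    w /= w.sum()
    stumps.append(stump)
    alphas.append(alpha)

# Final committee prediction: sign of the weighted vote.
votes = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
committee = np.sign(votes)
print("training accuracy:", (committee == y_pm).mean())
```

In practice you would just use `sklearn.ensemble.AdaBoostClassifier`, but the hand-rolled loop shows where the "down-weighted / up-weighted" language above comes from.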
SVMs are a useful first try. Additionally, you can use different kernels with SVMs and get not just linear decision boundaries but more funkily-shaped ones. And if you add slack variables with an L1 penalty (the soft-margin formulation), you can not only limit overfitting but also classify data that isn't linearly separable.
Decision trees are useful because just about anyone can interpret them, and they are easy to use. A fitted tree also gives you some idea of how important each feature was in building it. Something you might want to check out is additive trees, like MART (multiple additive regression trees).
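Both tree advantages, interpretability and feature importance, are easy to see in a short sketch, assuming scikit-learn (the iris dataset is just a convenient example):

```python
# Fit a small tree, inspect its feature importances, and print its rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# How much each feature contributed to the splits (importances sum to 1):
for name, imp in zip(iris.feature_names, tree.feature_importances_):
    print(f"{name}: {imp:.2f}")

# Human-readable if/else rules -- the interpretability advantage:
print(export_text(tree, feature_names=iris.feature_names))
```

The printed rules are something you can hand to a non-specialist, which is hard to do with an SVM's support vectors or an AdaBoost committee.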