Train and test set for an ML algorithm

I have a model trained on 33 datasets with an SVM using LOOCV. I collected another 13 datasets, which I split in a leave-one-out fashion: in the testing phase, I combine the 33 training datasets with 12 of the new datasets, train a model on those 45 datasets, and test it on the remaining dataset, repeating this for each of the 13 (similar to LOOCV). Is this method of testing right? All the recordings are independent of each other and can be regarded as IID.
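For concreteness, here is a minimal sketch of the leave-one-dataset-out loop described above. The data here is synthetic and the dataset sizes are placeholders, not the real recordings; it only illustrates the evaluation scheme in question.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def fake_dataset(n=30, d=5):
        # Placeholder for one recorded dataset: features X and binary labels y.
        X = rng.normal(size=(n, d))
        y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
        return X, y

    train_sets = [fake_dataset() for _ in range(33)]  # original training datasets
    new_sets = [fake_dataset() for _ in range(13)]    # newly collected datasets

    scores = []
    for i, (X_test, y_test) in enumerate(new_sets):
        # Training pool: all 33 original datasets plus the other 12 new ones.
        pool = train_sets + [d for j, d in enumerate(new_sets) if j != i]
        X_train = np.vstack([X for X, _ in pool])
        y_train = np.concatenate([y for _, y in pool])

        clf = SVC().fit(X_train, y_train)
        scores.append(accuracy_score(y_test, clf.predict(X_test)))

    print(f"mean held-out accuracy over 13 folds: {np.mean(scores):.3f}")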
No. LOOCV is normally used only for small datasets, or when you want the most accurate estimate of model performance and can afford training one model per fold.
For example, your training accuracy might be 90% while your test accuracy is only 50%. Such a gap is a sign of overfitting, made worse here by the very large training split and the tiny test split in each iteration.
[Image: overfitting in ML models]
Assuming your 45 training datasets and the single held-out dataset are all roughly the same size, your train/test split works out to about 98% / 2% in each iteration (45 of 46 datasets used for training).
The general rule of thumb for a train/test split is closer to 80% / 20%.
You could use scikit-learn's train_test_split, KFold, or StratifiedShuffleSplit instead, as sketched below.
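A brief sketch of those splitters on synthetic data; the dataset, SVC settings, and fold counts here are purely illustrative, not a prescription for your recordings.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import (train_test_split, KFold,
                                         StratifiedShuffleSplit, cross_val_score)
    from sklearn.svm import SVC

    # Synthetic stand-in for the pooled data.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    # Simple 80% / 20% hold-out split.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=0)
    print("hold-out accuracy:", SVC().fit(X_tr, y_tr).score(X_te, y_te))

    # 5-fold cross-validation.
    kf = KFold(n_splits=5, shuffle=True, random_state=0)
    print("5-fold CV scores:", cross_val_score(SVC(), X, y, cv=kf))

    # Stratified shuffle splits (keeps the class balance in every split).
    sss = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
    print("stratified shuffle CV scores:", cross_val_score(SVC(), X, y, cv=sss))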