Is it possible to update an existing text classification model in TensorFlow?

I have been doing text classification with TensorFlow. I would like to know whether this text classification model can be updated with new data that I acquire in the future, so that I do not have to train the model from scratch each time. Also, over time the number of classes may grow, since I am mostly dealing with customer data. Is it possible to update the existing text classification model with data containing a larger number of classes by using the existing checkpoints?
Since you are asking two different questions, I'll answer them separately:
1) Yes, you can continue training with the new data you acquire. This is simple: restore your model exactly as you do now when you use it for inference, but instead of running an output or prediction tensor, run the optimizer operation. This translates into code along the following lines:
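(A minimal TF1-style sketch, assuming the checkpoint was written with tf.train.Saver; the tensor/op names "input:0", "labels:0", "loss:0" and "optimizer", and the new_data_batches iterator, are placeholders for whatever your graph actually defines.)

```python
import tensorflow as tf

with tf.Session() as sess:
    # Restore the graph definition and the trained weights from the checkpoint
    saver = tf.train.import_meta_graph("./checkpoints/model.ckpt.meta")
    saver.restore(sess, tf.train.latest_checkpoint("./checkpoints"))

    graph = tf.get_default_graph()
    inputs = graph.get_tensor_by_name("input:0")          # model input placeholder
    labels = graph.get_tensor_by_name("labels:0")         # target placeholder
    loss = graph.get_tensor_by_name("loss:0")             # the "model.loss" op mentioned below
    train_op = graph.get_operation_by_name("optimizer")   # the "model.opt" op mentioned below

    # Run the optimizer (not just a prediction op) on the new data
    for x_batch, y_batch in new_data_batches:              # your new-data iterator
        _, batch_loss = sess.run([train_op, loss],
                                 feed_dict={inputs: x_batch, labels: y_batch})
        print("loss:", batch_loss)

    # Save the updated weights so the next update can start from here
    saver.save(sess, "./checkpoints/model.ckpt")
```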
Note that you need to know the names of the operations defined by the model in order to run the optimizer (model.opt) and the loss op (model.loss), so that you can train and monitor the loss during training.
2) If you want to change the number of labels, it is a bit more complicated. If your network is a single feed-forward layer, there is not much you can do: the output matrix changes dimensionality, so you need to retrain everything from scratch. On the other hand, if you have a multi-layer network (e.g. an LSTM plus a dense layer that does the classification), you can restore the weights of the old model and train only the last layer from scratch. To do that, I recommend reading this answer: https://stackoverflow.com/a/41642426/4186749
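(A minimal sketch of that partial restore, assuming you rebuild the graph with the output layer sized for the new classes and that the final classification layer lives in a variable scope named "classifier"; both the scope name and checkpoint path are hypothetical.)

```python
import tensorflow as tf

# ... rebuild the graph here, with the last layer sized for the new number of classes ...

all_vars = tf.global_variables()
shared_vars = [v for v in all_vars if not v.name.startswith("classifier")]

restorer = tf.train.Saver(var_list=shared_vars)   # restores only the layers shared with the old model
saver = tf.train.Saver()                          # saves the full updated model

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())                       # fresh values everywhere, incl. the new last layer
    restorer.restore(sess, tf.train.latest_checkpoint("./checkpoints"))  # overwrite the shared layers with the old weights
    # ... train as usual; optionally pass var_list with only the classifier
    #     variables to optimizer.minimize() if you want to update just the new layer ...
    saver.save(sess, "./checkpoints/model_new_classes.ckpt")
```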