How do I log a model with metrics and plots in MLRun?

I'm training a model using MLRun and would like to log the model using experiment tracking. What kinds of things can I log with the model? I'm specifically looking for metrics (e.g. accuracy, F1) and plots like loss over time.
MLRun has the ability to automatically log models with their metrics and generated plots attached. For a SciKit-Learn model, you apply MLRun's auto-logging hook to the model before training; a minimal sketch is shown below (the apply_mlrun import path and keyword arguments may differ slightly between MLRun versions):
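```python
# Minimal sketch of MLRun auto-logging for a SciKit-Learn model.
# The keyword arguments shown here are illustrative; check the docs for
# the MLRun version you are running.
import mlrun
from mlrun.frameworks.sklearn import apply_mlrun
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


def train(context: mlrun.MLClientCtx):
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

    model = RandomForestClassifier()

    # Wrap the model so that training is auto-logged to MLRun: the model
    # artifact, evaluation metrics (accuracy, F1, etc.), and plots are
    # attached to the run automatically.
    apply_mlrun(
        model=model,
        model_name="my_model",
        context=context,
        x_test=X_test,
        y_test=y_test,
    )

    model.fit(X_train, y_train)
```

You would typically execute such a handler as an MLRun job, for example via mlrun.code_to_function(...).run(handler="train", local=True), and the logged model, metrics, and plots then appear under that run in the UI.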
The result is a model logged in the experiment tracking framework with metrics, code, logs, plots, etc. available per run. The MLRun auto-logger supports standard ML frameworks such as SciKit-Learn, TensorFlow (and Keras), PyTorch, XGBoost, LightGBM, and ONNX.
Alternatively, you can log things manually using the MLRun context object that is available during the run. This lets you do things like context.log_model(...), context.log_dataset(...), or context.logger.info("Something happened"). More info on the MLRun execution context can be found in the MLRun documentation.
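For example, a handler that logs results, a dataset, and a model manually might look like the following sketch (the metric values and artifact names are placeholders, and the exact keyword arguments may vary between MLRun versions):

```python
# Hedged sketch of manual logging inside an MLRun handler.
import pickle

import mlrun
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression


def train(context: mlrun.MLClientCtx):
    X, y = load_iris(return_X_y=True, as_frame=True)
    model = LogisticRegression(max_iter=200).fit(X, y)

    # Scalar metrics show up as run results in the MLRun UI
    # (the values here are placeholders):
    context.log_results({"accuracy": 0.94, "f1": 0.91})

    # Datasets are logged as artifacts attached to the run:
    context.log_dataset("training_data", df=pd.concat([X, y], axis=1), format="csv")

    # Log the model itself; metrics and other attachments are optional:
    context.log_model(
        "my_model",
        body=pickle.dumps(model),
        model_file="model.pkl",
        metrics={"accuracy": 0.94, "f1": 0.91},
    )

    # Free-form log messages:
    context.logger.info("Training complete")
```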