I am running Stable Diffusion Automatic1111 on an Nvidia card with 12 GB of VRAM. I just finished installing the TensorRT extension. However, every time I launch webAI-user.bat I get an error popup that begins: "The procedure entry point ?destroyTensorDescriptorEx..." The message appears right when webAI-user.bat reaches "Launching Web UI with arguments: --xformers --no-half-vae --medvram-sdxl --no-half". (Please see attached JPG.) I can't tell whether TensorRT is actually speeding things up; I haven't used it much yet. But even if it does work, I would like to get rid of the popup, since I have to click multiple times to close it. Any help will be greatly appreciated, but sorry, I don't know Python. Zaffer
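For reference, the arguments quoted in the launch message normally come from the COMMANDLINE_ARGS line of the launcher batch file. A typical layout, assuming the standard Automatic1111 webui-user.bat template (the exact file name and contents on your install may differ), looks like this:

```shell
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem These flags match the ones shown in the "Launching Web UI" line above.
set COMMANDLINE_ARGS=--xformers --no-half-vae --medvram-sdxl --no-half

call webui.bat
```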
webAI-user.bat Python error message after installing TensorRT
607 views · Asked by Zaffer
1 answer below

I found that by following these directions on Nvidia's GitHub page, I got rid of the popup error. The directions were actually posted in response to a different "not found" problem, but they solved mine as well. Here's the link. Zaffer
https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT/issues/27#issuecomment-1767570566
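This class of "procedure entry point ... not found" popup on Windows typically means two mismatched copies of a cuDNN DLL are visible on the DLL search path (for example one shipped with PyTorch and one pulled in by the TensorRT extension). As a hedged diagnostic sketch (not taken from the linked issue, and the venv path is an assumption about a default install), you can list every copy of cudnn64_8.dll inside the webui's virtual environment:

```python
from pathlib import Path


def find_cudnn_dlls(venv_dir: str, name: str = "cudnn64_8.dll"):
    """Return every copy of the named cuDNN DLL found under the venv.

    Multiple hits with different file sizes usually indicate the
    version conflict behind the entry-point popup.
    """
    root = Path(venv_dir)
    return sorted(root.rglob(name)) if root.is_dir() else []


if __name__ == "__main__":
    # Adjust this path to wherever your stable-diffusion-webui checkout lives.
    for dll in find_cudnn_dlls(r"stable-diffusion-webui\venv"):
        print(dll, dll.stat().st_size, "bytes")
```

If this turns up more than one copy, the fix described in the linked issue (reinstalling or removing the conflicting package inside the venv) is the kind of step that resolves it.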