PaddleOCR slim models don't work on SageMaker


I'm trying to host a PaddleOCR model on an AWS SageMaker endpoint with two different configurations:

  1. Normal (large)

  2. Slim

The normal (1) model deploys successfully, and inference takes ~1.5 s. The slim (2) model also deploys successfully, but when it's invoked, inference runs for a very long time and then fails after 60 s.
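
Both variants are served with the same handler script, following the SageMaker PyTorch container's `model_fn`/`input_fn`/`predict_fn`/`output_fn` convention. A minimal sketch is below; the directory names under `model_dir` are placeholders for the extracted PaddleOCR artifacts, and only those paths differ between the two configurations:

```python
# inference.py -- handler sketch for the SageMaker PyTorch serving container.
# Directory names under model_dir are placeholders for the extracted PaddleOCR
# artifacts; for the slim variant they point at the slim detection/recognition models.
import json

import numpy as np
from paddleocr import PaddleOCR


def model_fn(model_dir):
    # Build the OCR pipeline once per worker, CPU only (ml.t2.medium has no GPU).
    return PaddleOCR(
        det_model_dir=f"{model_dir}/det",
        rec_model_dir=f"{model_dir}/rec",
        cls_model_dir=f"{model_dir}/cls",
        use_angle_cls=True,
        use_gpu=False,
        lang="en",
    )


def input_fn(request_body, content_type="application/json"):
    # Expect a JSON payload with the image as a nested list of pixel values.
    payload = json.loads(request_body)
    return np.array(payload["image"], dtype=np.uint8)


def predict_fn(image, ocr):
    # Run detection + angle classification + recognition on the decoded image.
    return ocr.ocr(image, cls=True)


def output_fn(prediction, accept="application/json"):
    return json.dumps(prediction, default=str)
```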

System configuration (see the deployment sketch below):

  • ml.t2.medium SageMaker instance
  • container 763104351884.dkr.ecr.us-east-2.amazonaws.com/pytorch-inference:2.0.1-cpu-py310-ubuntu20.04-sagemaker
  • Paddle: 2.5.0
  • PaddleOCR: 2.6.0.1
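
The endpoint is created with the SageMaker Python SDK roughly like this (a sketch only; the S3 URI and execution role are placeholders, while the container image and instance type are the ones listed above):

```python
# deploy.py -- endpoint creation sketch; model_data and role are placeholders.
from sagemaker.pytorch import PyTorchModel

model = PyTorchModel(
    # Tarball containing the model directories plus inference.py
    model_data="s3://<bucket>/paddleocr/model.tar.gz",
    role="<sagemaker-execution-role-arn>",
    entry_point="inference.py",
    image_uri=(
        "763104351884.dkr.ecr.us-east-2.amazonaws.com/"
        "pytorch-inference:2.0.1-cpu-py310-ubuntu20.04-sagemaker"
    ),
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.t2.medium",
)
```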

Why isn't the slim model able to run?
