Flask app deployed on PythonAnywhere with Machine Learning model not responding


I am new to PythonAnywhere and ran into an issue while deploying my Flask app. The app works fine on my local machine, but after deploying it to PythonAnywhere, a POST request with the key "image" and a JPEG upload takes an unusually long time and eventually fails with the error message "Something went wrong". The same happens with POST requests to /live_detect. Surprisingly, there are no relevant entries in the error logs.

POST endpoint: https://tomatosrapp.pythonanywhere.com/detect

from flask import Flask, request, jsonify
from flask_cors import CORS
from imageai.Detection.Custom import CustomObjectDetection, CustomVideoObjectDetection

app = Flask("Tomatect")
CORS(app)

model_path = "model.pt"
json_path = "json/detection_config.json"

prediction = CustomObjectDetection()
prediction.setModelTypeAsYOLOv3()
prediction.setModelPath(model_path)
prediction.setJsonPath(json_path)
prediction.loadModel()

video_detector = CustomVideoObjectDetection()
video_detector.setModelTypeAsYOLOv3()
video_detector.setModelPath(model_path)
video_detector.setJsonPath(json_path)
video_detector.loadModel()
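One thing worth noting about the setup above: both models are loaded at import time, which means every WSGI worker pays the full loading cost (for a 235 MB model) before it can serve its first request. A common workaround is to defer loading until the first request. This is only a generic lazy-initialization sketch, not the original app's code; `load_detector` is a hypothetical stand-in for the `CustomObjectDetection` setup above:

```python
_detector = None  # module-level cache, populated on first use

def load_detector():
    """Hypothetical stand-in for the ImageAI setup above; replace the
    body with the real CustomObjectDetection configuration."""
    return object()  # placeholder for a loaded model

def get_detector():
    """Return the cached detector, loading it on the first call only."""
    global _detector
    if _detector is None:  # only the first request pays the loading cost
        _detector = load_detector()
    return _detector
```

The route handlers would then call `get_detector()` instead of using the module-level `prediction` object, so the import of the WSGI module stays fast.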

@app.route("/detect", methods=["POST"])
def detect():
    if "image" not in request.files:
        return jsonify({"error": "No image uploaded"}), 400

    image = request.files["image"]
    #image.save(image.filename)

    detections = prediction.detectObjectsFromImage(input_image=image,
                                                   output_image_path=None)

    results = []
    for detection in detections:
        result = {
            "name": detection["name"],
            "percentage_probability": detection["percentage_probability"],
            "box_points": detection["box_points"]
        }
        results.append(result)

    return jsonify(results), 200

@app.route("/live_detect", methods=["POST"])
def live_detect():
    if "video" not in request.files:
        return jsonify({"error": "No video uploaded"}), 400

    video = request.files["video"]
    video.save(video.filename)

    def forFrame(frame_number, output_array, output_count):
        results = []
        for detection in output_array:
            result = {
                "name": detection["name"],
                "percentage_probability": detection["percentage_probability"]
            }
            results.append(result)

        print(results) 

    video_detector.detectObjectsFromVideo(input_file_path=video.filename,
                                          frames_per_second=20,
                                          frame_detection_interval=1,
                                          per_frame_function=forFrame,
                                          output_file_path="output_video.avi",
                                          minimum_percentage_probability=30)

    return "Live detection completed"

By the way, I have upgraded my PythonAnywhere account to the 2,000 CPU-seconds and 10 GB disk quota plan. Also, the model.pt file is 235 MB, if that matters.

I have already tried saving the image to a temporary file as a potential fix, but the issue remains. Can PythonAnywhere effectively handle this type of project?

1 Answer

Answered by Lost Soul:

I just resolved the issue by setting the number of threads:

import torch
torch.set_num_threads(1)