I seem to be having a problem loading the model on the LLM Inference page.
https://mediapipe-studio.webapps.google.com/studio/demo/llm_inference
After downloading Gemma from Kaggle, I tried loading the config.json file, but it didn't work. Do I need a .bin file?
This is not a WebGPU error; I'm using Opera.
On the Kaggle page, you need to select the TensorFlow Lite tab, then select the gemma-2b-it-gpu-int4 variation and download it. The downloaded file is an archive.tar.gz. When you decompress it, you will get a gemma-2b-it-gpu-int4.bin; this is the file you need to select in MediaPipe Studio. I hope it helps :)
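If you prefer to unpack the archive with a script instead of a GUI tool, here is a minimal Python sketch using the standard-library tarfile module. The archive path is an assumption; adjust it to wherever your Kaggle download landed.

```python
import tarfile

# Path to the Kaggle download; adjust if your browser saved it elsewhere (assumption).
ARCHIVE_PATH = "archive.tar.gz"

with tarfile.open(ARCHIVE_PATH, "r:gz") as tar:
    # List the members so you can confirm the .bin file is inside.
    for member in tar.getmembers():
        print(member.name)
    # Extract everything into the current directory; this yields gemma-2b-it-gpu-int4.bin.
    tar.extractall(path=".")
```

After extraction, point the model selector in MediaPipe Studio at the resulting gemma-2b-it-gpu-int4.bin file.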