I've added my custom NN model to my Unity machine learning project (using ML-Agents) with this code:
import torch
import torch.nn as nn
import onnx

device = "cuda" if torch.cuda.is_available() else "cpu"

class EnhancedMLP(nn.Module):
    def __init__(self, input_size, output_size):
        super(EnhancedMLP, self).__init__()
        # Doubling the number of layers and units
        self.fc1 = nn.Linear(input_size, 256)
        self.fc2 = nn.Linear(256, 256)
        self.fc3 = nn.Linear(256, 256)
        self.fc4 = nn.Linear(256, 256)
        self.fc5 = nn.Linear(256, output_size)
        self.activation = nn.ReLU()

    def forward(self, x):
        x = self.activation(self.fc1(x))
        x = self.activation(self.fc2(x))
        x = self.activation(self.fc3(x))
        x = self.activation(self.fc4(x))
        output = self.fc5(x)
        continuous_action_shape = torch.tensor([2], dtype=torch.int64)  # Assuming 2 continuous actions
        return output, continuous_action_shape

model = EnhancedMLP(input_size=96, output_size=2).to(device)

# Set the model to inference mode
model.eval()

# Create a dummy input tensor
dummy_input = torch.randn(1, 96).to(device)
# ... [rest of your code]
# Export the model to ONNX format
torch.onnx.export(model,                    # model being run
                  dummy_input,              # model input (or a tuple for multiple inputs)
                  "Albert.onnx",            # where to save the model (can be a file or file-like object)
                  export_params=True,       # store the trained parameter weights inside the model file
                  opset_version=9,          # the ONNX opset version to export the model to
                  do_constant_folding=True, # whether to execute constant folding for optimization
                  input_names=['vector_observation'],  # the model's input name
                  output_names=['continuous_actions', 'continuous_action_output_shape'],  # the model's output names
                  dynamic_axes={'vector_observation': {0: 'batch_size'},  # dynamic axis for the input tensor
                                'continuous_actions': {0: 'batch_size'},
                                'continuous_action_output_shape': {0: 'batch_size'}})
# Load the exported ONNX model
torchmodel = onnx.load('Albert.onnx')
graph = torchmodel.graph
# Add the version_number field
version_number = onnx.helper.make_tensor("version_number", onnx.TensorProto.INT64, [1], [3])
graph.initializer.append(version_number)
# Add the version_number to the model's outputs
version_number_info = onnx.helper.make_tensor_value_info("version_number", onnx.TensorProto.INT64, shape=[])
graph.output.append(version_number_info)
# Add the memory_size field
memory_size = onnx.helper.make_tensor("memory_size", onnx.TensorProto.INT64, [1], [0])
graph.initializer.append(memory_size)
# Add the memory_size to the model's outputs
memory_size_info = onnx.helper.make_tensor_value_info("memory_size", onnx.TensorProto.INT64, shape=[])
graph.output.append(memory_size_info)
# Define the continuous_action_output_shape tensor value info
continuous_action_shape_info = onnx.helper.make_tensor_value_info(
    "continuous_action_output_shape",
    onnx.TensorProto.INT64,
    [1]
)
# Add the continuous_action_output_shape to the model's outputs
graph.output.append(continuous_action_shape_info)
# Define and add the actual tensor value for continuous_action_output_shape
continuous_action_shape_tensor = onnx.helper.make_tensor(
    "continuous_action_output_shape",
    onnx.TensorProto.INT64,
    [1],
    [2]  # Assuming 2 continuous actions
)
graph.initializer.append(continuous_action_shape_tensor)
# Save the modified ONNX model
onnx.save(torchmodel, 'ModifiedAlbert.onnx')
print("Model has been converted to ONNX and modified.")
# model.eval()
# dummy_input = torch.randn(1, 96)
# dummy_input = dummy_input.to(device)
# torch.onnx.export(model, dummy_input, "AlbertModelMLP01mod.onnx")
# return torch.tensor([1.0, 0.0]).to(device)
model = onnx.load("Albert.onnx")
print(model.graph.input) # Check the input tensor names
print(model.graph.output) # Check the output tensor names
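As a sanity check, the modified file can also be run through ONNX's structural checker before loading it in Unity. This is a minimal sketch using onnx.checker from the onnx package; note it only catches malformed graphs, not ML-Agents-specific naming problems:

import onnx

# Load the patched model and run the ONNX checker; this raises
# onnx.checker.ValidationError if the graph itself is malformed.
checked = onnx.load("ModifiedAlbert.onnx")
onnx.checker.check_model(checked)
print("ONNX checker passed.")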
Unfortunately, it throws this error when I press 'Play':

[screenshot: my project and the error in the console]
The error: UnityAgentsException: Unknown tensorProxy expected as input : vector_observation
Unity.MLAgents.Inference.TensorGenerator.GenerateTensors (System.Collections.Generic.IReadOnlyList`1[T] tensors, System.Int32 currentBatchSize, System.Collections.Generic.IList`1[T] infos) (at ./Library/PackageCache/com.unity.ml-agents@89a6357016/Runtime/Inference/TensorGenerator.cs:178)
Unity.MLAgents.Inference.ModelRunner.DecideBatch () (at ./Library/PackageCache/com.unity.ml-agents@89a6357016/Runtime/Inference/ModelRunner.cs:210)
Unity.MLAgents.Policies.BarracudaPolicy.DecideAction () (at ./Library/PackageCache/com.unity.ml-agents@89a6357016/Runtime/Policies/BarracudaPolicy.cs:134)
Unity.MLAgents.Agent.DecideAction () (at ./Library/PackageCache/com.unity.ml-agents@89a6357016/Runtime/Agent.cs:1402)
Unity.MLAgents.Academy.EnvironmentStep () (at ./Library/PackageCache/com.unity.ml-agents@89a6357016/Runtime/Academy.cs:597)
Unity.MLAgents.AcademyFixedUpdateStepper.FixedUpdate () (at ./Library/PackageCache/com.unity.ml-agents@89a6357016/Runtime/Academy.cs:43)
Up to this point, I've always been able to fix the errors that came up by adding various bits of code after the 'Export the model to ONNX format' part (e.g. version_number, memory_size, etc.). But I'm stuck here, and not even GPT-4 can help me anymore; perhaps some of you can. I'd appreciate any insight into why the added NN model doesn't work.
I've tried exporting the .onnx file many times with different changes to the code, but the error never disappeared.
Just a tip: try using Google instead of ChatGPT (or use Phind instead). ChatGPT doesn't have up-to-date information, so sometimes recent errors or issues aren't in its database. I looked it up on Google, and according to this Unity post, this error happens when your Python packages are outdated. Try reinstalling them or updating your pip/Python version; that should fix the error.
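If you want to see what you currently have installed before upgrading, something like this works. It's a minimal sketch using importlib.metadata from the Python standard library; the package names and the pip command at the end are just examples:

from importlib.metadata import version, PackageNotFoundError

# Print the installed versions of the packages involved, so you can
# compare them against what your ML-Agents release expects.
for pkg in ("torch", "onnx", "mlagents"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "is not installed")

# Then upgrade with e.g.:
#   python -m pip install --upgrade torch onnx mlagents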