Error when PyTorch loads a model: RuntimeError: CUDA error: out of memory

created at: 02-15-2022

problem

When loading a trained model for testing, I encountered RuntimeError: CUDA error: out of memory.

This was surprising: the model is not particularly large, so how could it exhaust GPU memory?

solution

I later found the answer on the PyTorch forums: when loading the model, first load it onto the CPU via the map_location parameter of torch.load(), and only then move it to the GPU.

import torch

def load_model(model, model_save_path, use_state_dict=True):
    print(f"[i] load model from {model_save_path}")
    device = torch.device("cpu")  # load onto the CPU first to avoid allocating GPU memory
    if use_state_dict:
        # Recommended: restore the parameters into an existing model instance.
        model.load_state_dict(torch.load(model_save_path, map_location=device))
    else:
        # Alternative: deserialize the whole pickled model object.
        model = torch.load(model_save_path, map_location=device)
    print("[i] done")

    return model

model = Model(config)
model = load_model(model, config["model_save_path"], use_state_dict=True)
model.to(config["device"])  # move to the GPU only after loading on the CPU
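
For completeness, here is a minimal sketch of the matching save call, assuming the state_dict convention used by load_model above (saving the state_dict rather than pickling the whole model is the usual PyTorch recommendation):

# Save only the parameter tensors; the architecture is rebuilt from code at load time.
torch.save(model.state_dict(), config["model_save_path"])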

The official documentation does not call out this pitfall explicitly, but the cause follows from the default behavior of torch.load(): when map_location is not given, tensor storages are restored to the device they were saved from, so a checkpoint saved on cuda:0 allocates memory on that GPU the moment it is loaded.
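
As a quick reference, here is a minimal sketch of the common map_location variants; the checkpoint path is a hypothetical placeholder:

import torch

ckpt_path = "model.pt"  # hypothetical path, for illustration only

# Default: each tensor is restored to the device it was saved from,
# so a checkpoint saved on cuda:0 allocates GPU memory right here.
state = torch.load(ckpt_path)

# Load everything onto the CPU first, sidestepping the OOM at load time.
state = torch.load(ckpt_path, map_location="cpu")

# Or remap all tensors directly to a specific GPU.
state = torch.load(ckpt_path, map_location="cuda:0")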
