PyTorch RuntimeError: CUDA out of memory

created at 07-07-2021

Most people who run PyTorch programs on a server have hit this error at some point: CUDA out of memory. It means exactly what it says: the GPU does not have enough free memory for the job.

1. The error message states how much memory the GPU has already allocated and how much it tried to allocate; the remaining memory is simply not enough
In this case it is usually enough to reduce batch_size.
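A minimal sketch of what "reduce batch_size" means in practice, using a dummy TensorDataset standing in for your real data (the shapes here are made up for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset standing in for your real one (hypothetical shapes).
dataset = TensorDataset(torch.randn(256, 3, 32, 32),
                        torch.randint(0, 10, (256,)))

# If batch_size=64 runs out of memory, halve it until training fits.
loader = DataLoader(dataset, batch_size=32, shuffle=True)

batch, labels = next(iter(loader))
print(batch.shape)  # torch.Size([32, 3, 32, 32])
```

Memory used per step scales roughly linearly with batch size, so halving it is a quick way to find a size that fits.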

2. No matter how far you reduce batch_size, the error still appears: out of memory
This typically happens during inference or evaluation: autograd keeps the intermediate activations of every forward pass alive for a potential backward pass, so memory grows even with a tiny batch. Wrap the forward pass in torch.no_grad():

with torch.no_grad():  # no graph is built, so activations are freed immediately
    output = net(input, inputcoord)
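A self-contained sketch of the same idea, using a stand-in nn.Linear for the network (the model and input shapes are hypothetical):

```python
import torch
import torch.nn as nn

net = nn.Linear(10, 2)  # stand-in for your real network (hypothetical)
net.eval()              # also switch off dropout / batch-norm updates

x = torch.randn(4, 10)

# Without no_grad(), autograd would keep intermediate activations alive
# for backprop; over many inference batches that exhausts GPU memory.
with torch.no_grad():
    output = net(x)

print(output.requires_grad)  # False: no autograd graph was built
```

Calling net.eval() on top of torch.no_grad() is good practice for evaluation, though only no_grad() affects memory.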

3. The error gives no indication of how much memory is used or how much remains

This can mean your PyTorch build does not match the installed CUDA version. To check, start a Python interpreter and run:

import torch
print(torch.__version__)
print(torch.version.cuda)
print(torch.backends.cudnn.version())
print(torch.cuda.is_available())

This prints your torch version, the CUDA version it was built against, the cuDNN version, and whether torch can use the GPU. If torch.cuda.is_available() returns False, install a PyTorch build that matches your CUDA version (or adjust the CUDA version to match PyTorch).
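When the versions do match, PyTorch's own memory counters can show where the memory went before the OOM; a short sketch (output depends on your machine, and the branch is skipped when no GPU is visible):

```python
import torch

if torch.cuda.is_available():
    # Bytes held by live tensors vs. bytes reserved by the caching allocator.
    print(torch.cuda.memory_allocated() / 1024**2, "MiB allocated")
    print(torch.cuda.memory_reserved() / 1024**2, "MiB reserved")
    # Detailed per-device breakdown, useful when diagnosing OOM errors.
    print(torch.cuda.memory_summary())
else:
    print("CUDA not available under this torch/CUDA combination")
```

The gap between "reserved" and "allocated" is cached by PyTorch and reusable, so only "allocated" reflects what your tensors actually hold.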
