RuntimeError: Error building extension 'fused' / FAILED: fused_bias_act_kernel.cuda.o / ninja: build stopped

created at 08-19-2021

error message

RuntimeError: Error building extension 'fused': [1/3] /usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -isystem /root/miniconda3/lib/python3.8/site-packages/torch/include -isystem /root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /root/miniconda3/lib/python3.8/site-packages/torch/include/TH -isystem /root/miniconda3/lib/python3.8/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/miniconda3/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -std=c++14 -c /autodl-tmp/Run_dir/20210817172013/op/fused_bias_act_kernel.cu -o fused_bias_act_kernel.cuda.o
FAILED: fused_bias_act_kernel.cuda.o
/usr/local/cuda/bin/nvcc -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -isystem /root/miniconda3/lib/python3.8/site-packages/torch/include -isystem /root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /root/miniconda3/lib/python3.8/site-packages/torch/include/TH -isystem /root/miniconda3/lib/python3.8/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/miniconda3/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -std=c++14 -c /autodl-tmp/Run_dir/20210817172013/op/fused_bias_act_kernel.cu -o fused_bias_act_kernel.cuda.o
nvcc fatal : Unsupported gpu architecture 'compute_86'
[2/3] c++ -MMD -MF fused_bias_act.o.d -DTORCH_EXTENSION_NAME=fused -DTORCH_API_INCLUDE_EXTENSION_H -isystem /root/miniconda3/lib/python3.8/site-packages/torch/include -isystem /root/miniconda3/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /root/miniconda3/lib/python3.8/site-packages/torch/include/TH -isystem /root/miniconda3/lib/python3.8/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /root/miniconda3/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /autodl-tmp/Run_dir/20210817172013/op/fused_bias_act.cpp -o fused_bias_act.o
In file included from /root/miniconda3/lib/python3.8/site-packages/torch/include/c10/core/DeviceType.h:8:0,
                 from /root/miniconda3/lib/python3.8/site-packages/torch/include/c10/core/Device.h:3,
                 from /root/miniconda3/lib/python3.8/site-packages/torch/include/c10/core/Allocator.h:6,
                 from /root/miniconda3/lib/python3.8/site-packages/torch/include/ATen/ATen.h:7,
                 from /autodl-tmp/Run_dir/20210817172013/op/fused_bias_act.cpp:2:
/autodl-tmp/Run_dir/20210817172013/op/fused_bias_act.cpp: In function 'at::Tensor fused_bias_act(const at::Tensor&, const at::Tensor&, const at::Tensor&, int, int, float, float)':
/autodl-tmp/Run_dir/20210817172013/op/fused_bias_act.cpp:11:22: warning: 'at::DeprecatedTypeProperties& at::Tensor::type() const' is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
                      ^
/autodl-tmp/Run_dir/20210817172013/op/fused_bias_act.cpp:15:3: note: in expansion of macro 'CHECK_CUDA'
   CHECK_CUDA(x);
   ^~~~~~~~~~
/autodl-tmp/Run_dir/20210817172013/op/fused_bias_act.cpp:22:3: note: in expansion of macro 'CHECK_INPUT'
   CHECK_INPUT(input);
   ^~~~~~~~~~~
In file included from /root/miniconda3/lib/python3.8/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /root/miniconda3/lib/python3.8/site-packages/torch/include/ATen/Context.h:4,
                 from /root/miniconda3/lib/python3.8/site-packages/torch/include/ATen/ATen.h:9,
                 from /autodl-tmp/Run_dir/20210817172013/op/fused_bias_act.cpp:2:
/root/miniconda3/lib/python3.8/site-packages/torch/include/ATen/core/TensorBody.h:277:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
In file included from /root/miniconda3/lib/python3.8/site-packages/torch/include/c10/core/DeviceType.h:8:0,
                 from /root/miniconda3/lib/python3.8/site-packages/torch/include/c10/core/Device.h:3,
                 from /root/miniconda3/lib/python3.8/site-packages/torch/include/c10/core/Allocator.h:6,
                 from /root/miniconda3/lib/python3.8/site-packages/torch/include/ATen/ATen.h:7,
                 from /autodl-tmp/Run_dir/20210817172013/op/fused_bias_act.cpp:2:
/autodl-tmp/Run_dir/20210817172013/op/fused_bias_act.cpp:11:22: warning: 'at::DeprecatedTypeProperties& at::Tensor::type() const' is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
                      ^
/autodl-tmp/Run_dir/20210817172013/op/fused_bias_act.cpp:15:3: note: in expansion of macro 'CHECK_CUDA'
   CHECK_CUDA(x);
   ^~~~~~~~~~
/autodl-tmp/Run_dir/20210817172013/op/fused_bias_act.cpp:23:3: note: in expansion of macro 'CHECK_INPUT'
   CHECK_INPUT(bias);
   ^~~~~~~~~~~
In file included from /root/miniconda3/lib/python3.8/site-packages/torch/include/ATen/Tensor.h:3:0,
                 from /root/miniconda3/lib/python3.8/site-packages/torch/include/ATen/Context.h:4,
                 from /root/miniconda3/lib/python3.8/site-packages/torch/include/ATen/ATen.h:9,
                 from /autodl-tmp/Run_dir/20210817172013/op/fused_bias_act.cpp:2:
/root/miniconda3/lib/python3.8/site-packages/torch/include/ATen/core/TensorBody.h:277:30: note: declared here
   DeprecatedTypeProperties & type() const {
                              ^~~~
ninja: build stopped: subcommand failed.

solution

vim ~/.bashrc

Then add this line at the end:

export TORCH_CUDA_ARCH_LIST="7.5"

Finally, run source ~/.bashrc (or open a new shell) so the variable takes effect before you rebuild.
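To see what the variable controls, here is a rough sketch, with our own helper name, of how torch.utils.cpp_extension turns an arch list such as "7.5" or "7.5+PTX" into the -gencode flags visible on the failing nvcc command line above. This is a simplification for illustration, not PyTorch's actual code:

```python
def gencode_flags(arch_list: str) -> list:
    """Map a TORCH_CUDA_ARCH_LIST-style string (e.g. "7.5" or "7.5+PTX")
    to nvcc -gencode flags. Simplified sketch of PyTorch's behaviour."""
    flags = []
    for entry in arch_list.replace(";", " ").split():
        ptx = entry.endswith("+PTX")
        num = (entry[:-4] if ptx else entry).replace(".", "")
        # code=sm_XX emits a binary only for that exact architecture ...
        flags.append(f"-gencode=arch=compute_{num},code=sm_{num}")
        if ptx:
            # ... while code=compute_XX also embeds PTX that newer GPUs
            # can JIT-compile at load time.
            flags.append(f"-gencode=arch=compute_{num},code=compute_{num}")
    return flags

print(gencode_flags("7.5"))
```

With "7.5" the generated command line no longer mentions compute_86, which is exactly why the older nvcc stops aborting.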

The real cause is that the installed CUDA toolkit is too old for the GPU: support for compute_86 (Ampere cards such as the RTX 30 series) was only added in CUDA 11.1, so an older nvcc aborts when PyTorch asks it to target that architecture. Lowering TORCH_CUDA_ARCH_LIST to a capability your toolkit knows keeps -gencode=arch=compute_86 off the nvcc command line, and the build goes through; the Tensor.type() deprecation warnings in the log are harmless and not the cause of the failure. If the compiled extension then fails at runtime with "no kernel image is available", use TORCH_CUDA_ARCH_LIST="7.5+PTX" so the embedded PTX can be JIT-compiled for the newer GPU, or upgrade to a CUDA toolkit of 11.1 or later.
In any case, the CUDA and cuDNN versions must be paired with your PyTorch build; search for a specific environment configuration, there are many guides.
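Before editing anything, you can confirm that the toolkit, not the code, is the problem by comparing the nvcc release against the first CUDA version that can target each architecture. A small self-contained sketch; the version table and function names are ours, and the sample string mirrors the standard last line of nvcc --version output:

```python
import re

# First CUDA toolkit release that can target each compute capability
# (partial table, covering the architectures relevant to this log).
MIN_CUDA = {"7.5": (10, 0), "8.0": (11, 0), "8.6": (11, 1)}

def cuda_release(nvcc_version_text: str) -> tuple:
    """Extract (major, minor) from `nvcc --version` output."""
    m = re.search(r"release (\d+)\.(\d+)", nvcc_version_text)
    if m is None:
        raise ValueError("unrecognised nvcc --version output")
    return (int(m.group(1)), int(m.group(2)))

def toolkit_supports(nvcc_version_text: str, arch: str) -> bool:
    """True if this nvcc can compile for the given compute capability."""
    return cuda_release(nvcc_version_text) >= MIN_CUDA[arch]

# Example: the shape of `nvcc --version` output on a CUDA 10.2 install.
sample = "Cuda compilation tools, release 10.2, V10.2.89"
print(toolkit_supports(sample, "8.6"))  # a 10.2 toolkit cannot target compute_86
```

If this reports False for your GPU's architecture, either lower TORCH_CUDA_ARCH_LIST as above or upgrade the toolkit.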
