Torch not compiled with CUDA enabled
I can't compile Torch with CUDA for you, but here is some guidance on how to do it yourself.
First, make sure you have installed the CUDA Toolkit on your system. You can download it from the NVIDIA website.
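Before going further, it is worth confirming that the toolkit and driver are actually visible on your system. A quick check along these lines (assuming `nvcc` is on your PATH, which usually means `/usr/local/cuda/bin` has been added to it) is:
```
# Check that the CUDA compiler and the NVIDIA driver are visible
nvcc --version    # prints the installed CUDA Toolkit version
nvidia-smi        # lists your GPUs and the driver version
```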
Next, install the NVIDIA cuDNN library. This is a library of primitives for deep neural networks that is highly optimized for NVIDIA GPUs.
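To confirm cuDNN is somewhere the build can find it, you can look for its header and shared library. The paths below assume the common default install prefix under `/usr/local/cuda`; adjust them if you installed cuDNN elsewhere:
```
# Look for the cuDNN header and shared library (default CUDA install prefix assumed)
ls /usr/local/cuda/include/cudnn*.h
ls /usr/local/cuda/lib64/libcudnn*
```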
Once you have installed these dependencies, you can compile Torch with CUDA enabled by following these steps:
1. Clone the Torch repository from GitHub:
```
git clone https://github.com/torch/distro.git ~/torch --recursive
```
2. Change into the Torch directory:
```
cd ~/torch
```
3. Install the build dependencies, then run the installation script. It should detect CUDA automatically as long as the CUDA Toolkit is installed and `nvcc` is visible, so no extra flag is needed:
```
bash install-deps
./install.sh
```
4. Wait for the installation to complete. This may take some time, depending on your system.
5. Test that Torch is working with CUDA by running the following command:
```
th -e "require 'cutorch'; print(cutorch.getDeviceCount())"
```
If everything is working correctly, you should see the number of available CUDA devices printed to the console.
That's it! You should now be able to use Torch with CUDA-enabled operations.
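If the `th` command is not found after installation, the installer normally offers to add Torch to your shell startup file. Below is a sketch of activating that environment and running a slightly fuller GPU smoke test than the one-liner in step 5; it assumes the default `~/torch` install path and that the `cutorch` package built successfully:
```
# Pick up the Torch environment the installer added to your shell profile
source ~/.bashrc            # or: source ~/torch/install/bin/torch-activate

# Query each CUDA device and run a tiny computation on the GPU
th -e "
require 'cutorch'
for i = 1, cutorch.getDeviceCount() do
  print(i, cutorch.getDeviceProperties(i).name)
end
local a = torch.CudaTensor(3, 3):fill(2)
print(a:sum())  -- should print 18 if the GPU path works
"
```
If the device loop prints your GPU names and the sum comes back as 18, Torch is building and running CUDA kernels correctly.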