- Using virtual environments with Miniconda on Windows 11 avoids conflicts between versions of Python, TensorFlow, CUDA, and cuDNN.
- TensorFlow 2.10 is the last version with official GPU support via CUDA on native Windows, requiring CUDA 11.2 and cuDNN 8.1.
- The tensorflow-directml-plugin offers GPU acceleration via DirectML for NVIDIA, AMD, and Intel GPUs without relying on CUDA.
- GPU verification in TensorFlow and benchmarks such as ai-benchmark show performance improvements of up to 8-10x compared to the CPU.

Configuring TensorFlow to truly take advantage of the GPU on Windows 11 can become a bit of an ordeal if you go in blind: incompatible versions, strange errors, outdated guides, and conflicts with previous Python, CUDA, or cuDNN installations. If you've ever wondered, "Why on earth does TensorFlow keep using the CPU when I have a powerful GPU?", this tutorial is for you.
After compiling and unifying information from various official guides and real-world practice, you will see how to get TensorFlow working with a GPU on Windows 11 reliably using virtual environments (Conda/Miniconda), along with Python examples for AI. This guide explains which version combinations work, what options you have if your GPU is NVIDIA (CUDA) or if you want to use DirectML (AMD, Intel, NVIDIA), and how to verify that everything is configured correctly. The goal is that, when you're finished, you'll have a stable and isolated environment without breaking other Python installations or projects.
Hardware requirements and acceleration options in Windows 11
Before installing anything, it's crucial to know what type of acceleration you want to use and whether your hardware is compatible. You mainly have two ways to use the GPU with TensorFlow on Windows 11: the classic NVIDIA stack (CUDA + cuDNN) or the tensorflow-directml-plugin, which runs on DirectX 12 and supports NVIDIA, AMD, and Intel GPUs.
If you have an NVIDIA GPU with CUDA support (for example, a GeForce RTX 2060, RTX 3060, or similar), you can follow the traditional approach with CUDA and cuDNN, which integrates best with TensorFlow 2.10 on native Windows. This method relies heavily on properly matching the versions: drivers, CUDA Toolkit, cuDNN, Python, and TensorFlow. For practical instructions on installing the CUDA Toolkit, you can consult specific guides on CUDA + cuDNN.
However, if your GPU is AMD or Intel, or you simply want a more flexible option, the tensorflow-directml-plugin allows you to use the GPU through DirectML on Windows 10/11, both in native mode and under WSL. In this case, you don't depend on CUDA/cuDNN, but rather on DirectX 12 support and updated drivers for your graphics card.
The typical minimum requirements for DirectML on Windows (according to Microsoft documentation) are: Windows 10 version 1709 or later, or Windows 11 21H2 or later; Python 3.7 to 3.10 on a 64-bit system; and a compatible GPU such as AMD Radeon R5/R7/R9 2xx or later, Intel HD Graphics 5xx or later, or NVIDIA GeForce GTX 9xx or later. If you're using CUDA directly, you need an NVIDIA GPU with CUDA compute capability 3.5 or later and up-to-date drivers.
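As a quick sanity check before creating any environment, a small sketch like the following encodes the Python version constraint for the DirectML plugin (the function name directml_python_ok is ours, just for illustration):

```python
import sys

def directml_python_ok(version_info=sys.version_info):
    """tensorflow-directml-plugin supports 64-bit Python 3.7 through 3.10."""
    major, minor = version_info[0], version_info[1]
    return major == 3 and 7 <= minor <= 10

# Warn early instead of failing later during pip install
if not directml_python_ok():
    print("This interpreter ({}.{}) is outside the 3.7-3.10 range for DirectML.".format(
        sys.version_info[0], sys.version_info[1]))
```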
In all cases, it is mandatory to have the latest GPU drivers installed. In Windows you can check for updates from “Settings > Windows Update > Check for updates” and, for NVIDIA, also from the GeForce Experience application or the official drivers website.
Choosing versions: why TensorFlow 2.10 is key in Windows
One of the biggest headaches when installing TensorFlow with GPU support on Windows 11 is that not all versions are compatible. Starting with TensorFlow 2.11, native CUDA support on Windows disappears; therefore, in practice, TensorFlow 2.10 is the last stable version you can use with GPUs via CUDA on Windows without resorting to other methods such as WSL or Docker.
In a tested configuration on Windows 11 with an Intel Core i7-11800H CPU, an NVIDIA GeForce RTX 3060 Laptop GPU, and 16 GB of RAM, TensorFlow 2.10 was used with Python 3.10 inside Conda, CUDA 11.2, and cuDNN 8.1. Although the system had Python 3.12.6 and CUDA 12.3 installed globally, this separation was achieved precisely thanks to the use of a Conda virtual environment.
TensorFlow 2.10 is particularly sensitive to NumPy: it does not work with NumPy 2.x, so it is important to keep NumPy 1.23.5 or another supported 1.x version. If you already had NumPy 2 installed in your environment, you will need to reinstall the appropriate version before installing TensorFlow.
If you work with historical versions, TensorFlow 1.15 differentiated between CPU and GPU packages, with different names in pip (tensorflow and tensorflow-gpu). Starting with TensorFlow 2.x, the pip package tensorflow already includes integrated GPU support when the CUDA/cuDNN requirements are met, although on Windows this support is effectively limited to the 2.10 branch.
For those who choose DirectML, the combination recommended by Microsoft is to use tensorflow-cpu==2.10 as a base and then add the tensorflow-directml-plugin package, which automatically activates the DirectML backend without changing your code.
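To make that NumPy constraint explicit, here is a minimal helper (our own naming, not a TensorFlow API) that checks whether a NumPy version string is on a branch TensorFlow 2.10 accepts:

```python
def numpy_ok_for_tf210(version):
    """TensorFlow 2.10 requires NumPy 1.x (e.g. 1.23.5); any 2.x release breaks it."""
    return int(version.split(".")[0]) < 2

# Typical use inside the tf-2.10 environment:
# import numpy as np
# if not numpy_ok_for_tf210(np.__version__):
#     print("Run: pip install numpy==1.23.5")
```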
Installing and setting up Miniconda on Windows 11
The cleanest way to avoid conflicts between versions of Python, TensorFlow, CUDA, and cuDNN is to work with virtual environments. Miniconda is a lightweight and very convenient option for creating these isolated environments on Windows 11, while keeping global installations intact.
First, download the Miniconda installer for Windows 64-bit (the x86-64 version). The installer usually comes as an .exe file. During installation, it is highly recommended that you do not select the "Add Miniconda to PATH" option, to avoid interference with other Python installations you may have.
Once the installation wizard is complete, restart your computer to ensure everything is registered correctly. Then, open a terminal (CMD or PowerShell) and check that conda is available by running:
conda --version
If you see a result like conda 25.1.1 or similar, Miniconda has been installed correctly and you can now start creating specific virtual environments for TensorFlow with GPU.
Create a Conda environment for TensorFlow 2.10 and NVIDIA GPUs

With Miniconda up and running, the next step is to create an isolated environment to install TensorFlow 2.10 along with all its CUDA/cuDNN dependencies. This ensures that the changes do not affect other projects or the system's global Python environment.
Open Anaconda Prompt or PowerShell with Conda support and create a new environment, for example called tf-2.10, with Python 3.10:
conda create --name tf-2.10 python=3.10
When the environment creation is complete, initialize Conda in your shell so you can activate it easily by running:
conda init
After that command, close and reopen the PowerShell or CMD window for the changes to take effect. Then you can activate the newly created environment with:
conda activate tf-2.10
With the tf-2.10 environment active, everything you install with pip or conda will be encapsulated there, without being mixed with other installations. It is within this environment that you will install NumPy, TensorFlow 2.10, and the necessary CUDA/cuDNN libraries.
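A quick way to confirm that you really are inside the intended environment is to inspect the interpreter path. This small sketch (the running_in_env helper is hypothetical, just for illustration) checks whether the env name appears in sys.executable:

```python
import re
import sys

def running_in_env(env_name, executable=None):
    """Heuristic check: does the interpreter path contain the Conda env name?"""
    exe = executable or sys.executable
    # Split on both Windows and POSIX separators so the check is portable
    return env_name in re.split(r"[\\/]", exe)

print(sys.executable)  # should point inside ...\envs\tf-2.10\ when the env is active
```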
Install TensorFlow 2.10, NumPy compatible, CUDA and cuDNN in Conda
TensorFlow 2.10 has very specific library requirements. The first thing to do is ensure that NumPy is not on the 2.x branch within the tf-2.10 environment. If you suspect that an incompatible version has been installed, you can set the correct version as follows:
pip install numpy==1.23.5
Once you have NumPy at an accepted version, install TensorFlow 2.10 from pip within the same environment:
pip install tensorflow==2.10
As mentioned before, TensorFlow 2.10 for native Windows is tied to CUDA 11.2 and cuDNN 8.1. To simplify things and avoid dealing with NVIDIA's global installers, you can install these libraries directly in the Conda environment using the conda-forge channel:
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1
With this approach, you don't depend on your machine's global CUDA Toolkit matching what TensorFlow requires. In fact, you can have, for example, CUDA 12.3 installed in Windows for other tasks, and at the same time use a "virtual" CUDA 11.2 within tf-2.10 without the two conflicting.
Keep in mind that the official TensorFlow guide also describes a "classic" procedure for installing CUDA and cuDNN at the system level, adding their paths to the Windows PATH. If you decide to follow this traditional method, you will need to ensure that the installed versions exactly match those required by TensorFlow, and that the file cudnn64_8.dll is present and accessible.
Classic installation of NVIDIA drivers, CUDA Toolkit 11.2 and cuDNN 8.1
If you prefer or need to have CUDA/cuDNN installed globally on Windows 11, there is a widely used logical sequence that has also been tested on Windows 10 with GPUs such as the GeForce RTX 2060. This approach is based on NVIDIA's official stack and relies on some additional tools.
The first thing is to have a Microsoft Visual C++ compiler, since the CUDA Toolkit integrates with Visual Studio to compile certain components. The easiest way is to install Microsoft Visual Studio with the C++ toolset enabled.
Then install the latest drivers for your NVIDIA GPU from the official NVIDIA downloads site, selecting your specific model (for example, GeForce RTX 2060) and the corresponding Windows operating system. The installer usually offers a quick installation mode; in practice, you simply accept and proceed.
With the drivers in place, you can move on to installing CUDA Toolkit 11.2 from the NVIDIA archive of previous versions. There you choose the 11.2 branch, your operating system (Windows), and the installer type (.exe). During installation, it's common to opt for a "custom" mode, but in most cases you can leave the default settings, simply clicking Next and OK.
The next step is to take care of cuDNN 8.1, the specific library for deep neural networks. To download it, you need to be registered on the NVIDIA developer portal. Once logged in, access the cuDNN version archive and choose the variant that matches CUDA 11.2 for Windows.
The cuDNN package is not a classic installer, but a compressed file with a folder called "cuda". Inside, you'll find subfolders like bin, include, and lib, filled with .dll, .h, and other files. What you need to do is copy the contents of these subfolders into the corresponding paths of your CUDA 11.2 installation, usually in C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2, respecting the bin, include, and lib structure.
Once those files have been copied, it is important to review and adjust the Windows %PATH% environment variable to include the directories where the CUDA, CUPTI, and cuDNN libraries reside. For example, for CUDA 11.0 (analogous to 11.2), the official documentation proposes commands like these:
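If you prefer to script the copy instead of dragging folders by hand, a sketch like this mirrors the bin/include/lib layout into the CUDA installation (the merge_cudnn helper and the example paths are illustrative assumptions, not an NVIDIA tool):

```python
import shutil
from pathlib import Path

def merge_cudnn(cudnn_dir, cuda_dir, subdirs=("bin", "include", "lib")):
    """Copy cuDNN's bin/include/lib contents into the matching CUDA folders."""
    for sub in subdirs:
        src = Path(cudnn_dir) / sub
        if src.is_dir():
            # dirs_exist_ok merges into the existing CUDA folders (Python 3.8+)
            shutil.copytree(src, Path(cuda_dir) / sub, dirs_exist_ok=True)

# Hypothetical usage after extracting the cuDNN zip:
# merge_cudnn(r"C:\Downloads\cudnn-extracted\cuda",
#             r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2")
```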
SET PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin;%PATH%
SET PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\extras\CUPTI\lib64;%PATH%
SET PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include;%PATH%
SET PATH=C:\tools\cuda\bin;%PATH%
By adapting those paths to your specific CUDA version and the folder where you placed cuDNN (for example, C:\tools\cuda), you ensure that TensorFlow finds the necessary DLLs when it runs.
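A portable way to verify the result is to scan the PATH entries for the DLLs TensorFlow will try to load (dirs_with_file is just an illustrative helper, not part of TensorFlow):

```python
import os

def dirs_with_file(filename, path_value=None):
    """Return the PATH entries that actually contain the given file."""
    raw = path_value if path_value is not None else os.environ.get("PATH", "")
    entries = raw.split(os.pathsep)
    return [d for d in entries if d and os.path.isfile(os.path.join(d, filename))]

# Example check after editing %PATH%:
# print(dirs_with_file("cudnn64_8.dll"))  # an empty list means TensorFlow won't find cuDNN
```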
Install TensorFlow GPU in alternative Conda environments and test the GPU
There are other version combinations that have also been used successfully on Windows, especially with older versions of TensorFlow. For example, for TensorFlow 2.6.0 with GPU in a Python 3.7 environment, you can create a Conda environment like this:
conda create -n test_tensorflow_gpu python=3.7
After creating the environment, activate it with:
conda activate test_tensorflow_gpu
and then install tensorflow-gpu 2.6.0 using pip:
pip install tensorflow-gpu==2.6.0
There are also examples of somewhat older environments with TensorFlow 2.1.0 and CUDA 10.1, where a Conda environment is created with Anaconda and Python 3.7.7, ipykernel is added, and various scientific dependencies are installed, including Keras 2.3.1:
$ conda create -n entornoGPU anaconda python=3.7.7
$ conda activate entornoGPU
$ conda install ipykernel
$ python -m ipykernel install --user --name entornoGPU --display-name "entornoGPU"
$ conda install tensorflow-gpu==2.1.0 cudatoolkit=10.1
$ pip install tensorflow==2.1.0
$ pip install jupyter
$ pip install keras==2.3.1
$ pip install numpy scipy Pillow cython matplotlib scikit-image opencv-python h5py imgaug IPython
Whatever specific combination you choose (2.10, 2.6.0, 2.1.0, etc.), the pattern is always the same: create a clean virtual environment, install the compatible version of Python, add TensorFlow and the appropriate CUDA/cuDNN libraries, and finally, check that the GPU is visible.
A quick way to see if TensorFlow recognizes the GPU is to open Python and run:
import tensorflow as tf
print("TensorFlow version:", tf.__version__)
print("GPUs available:", tf.config.list_physical_devices('GPU'))
The ideal output would look something like this:
TensorFlow version: 2.10.0
GPUs available: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
If, on the contrary, the list of GPUs appears empty ([]), TensorFlow is only using the CPU. In that case, it's advisable to close the terminal, reactivate the environment, and check that the CUDA/cuDNN paths are correctly configured, that the NVIDIA driver is up to date, and that there are no version incompatibilities.
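If you want your scripts to fail fast with a helpful message instead of silently training on the CPU, you can wrap that check in a small guard (require_gpu is our own convenience function, not a TensorFlow API):

```python
def require_gpu(gpus):
    """Given the result of tf.config.list_physical_devices('GPU'), fail fast if empty."""
    if not gpus:
        raise RuntimeError(
            "No GPU visible to TensorFlow: reactivate the Conda environment and check "
            "that cudatoolkit=11.2, cudnn=8.1 and the NVIDIA driver are correctly installed."
        )
    return gpus

# Typical use at the top of a training script:
# import tensorflow as tf
# require_gpu(tf.config.list_physical_devices('GPU'))
```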
Using TensorFlow-DirectML-Plugin on Windows 11
For many Windows 11 users, especially those with AMD or Intel GPUs, or those who want to avoid the complexity of CUDA/cuDNN, the tensorflow-directml-plugin offers an interesting alternative. This plugin leverages DirectML on top of DirectX 12 and allows you to accelerate TensorFlow without relying on the CUDA ecosystem.
The first requirement is that your Windows 10/11 is a compatible version (Windows 10 1709+ or Windows 11 21H2+), and that the GPU supports DirectX 12 and DirectML. Additionally, you need 64-bit Python 3.7, 3.8, 3.9, or 3.10, with 3.10 being the highest version supported by this plugin.
Once again, it is recommended to use Miniconda to create a virtual environment. After installing Miniconda, you can create an environment called, for example, tfdml_plugin with:
conda create --name tfdml_plugin python=3.9
conda activate tfdml_plugin
With the environment activated, you must install the CPU-only base version of TensorFlow that the plugin requires, specifically tensorflow-cpu==2.10, as the plugin is not compatible with the "normal" tensorflow or tensorflow-gpu packages:
pip install tensorflow-cpu==2.10
Next, install the tensorflow-directml-plugin itself with a simple:
pip install tensorflow-directml-plugin
Once completed, your TensorFlow scripts should start using the DirectML backend transparently, without having to modify the code. If you already had models or notebooks, simply run them within the tfdml_plugin environment.
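To confirm which packages actually ended up in the environment, you can query pip metadata from Python itself. This sketch uses only the standard library (importlib.metadata, Python 3.8+); the installed_version helper is our own naming:

```python
from importlib import metadata

def installed_version(package):
    """Return the installed version of a pip package, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

# In the tfdml_plugin environment you would expect:
# installed_version("tensorflow-cpu")             -> a 2.10.x version string
# installed_version("tensorflow-directml-plugin") -> a version string, not None
```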
Integrate the Conda environment with PyCharm and other IDEs
If you use PyCharm or another IDE to develop in Python, it is very useful to associate the IDE's interpreter directly with the Conda environment you have created for TensorFlow, so that you run and debug projects using exactly the library versions you have configured.
In PyCharm, for example, you can go to File > Settings > Project: <project name> > Python Interpreter and from there add the existing Conda environment. To do this, select "Add Interpreter > Add Local Interpreter".
In the wizard, choose “Existing environment” and navigate to the path of your environment's Python executable, which usually looks like this:
C:\Users\<your_user>\miniconda3\envs\tf-2.10\python.exe
Once selected, choose the tf-2.10 environment (or whatever name you gave it), confirm with OK, and PyCharm will start using it for that project. This way, imports of TensorFlow, NumPy, CUDA/cuDNN, etc., will always be resolved from the correct environment.
Advanced GPU vs CPU tests and benchmarks
In addition to the basic check with tf.config.list_physical_devices('GPU'), there are more advanced ways to ensure that TensorFlow is using the GPU and, incidentally, measure the actual performance difference compared to the CPU.
For installations like tensorflow-gpu 2.6.0 on Python 3.7, you can write a small Python script that retrieves the list of local devices and displays detailed information about the GPU, along with the CUDA and cuDNN versions that TensorFlow is using internally. A typical example would be:
import tensorflow
from tensorflow.python.client import device_lib

def print_info():
    build = tensorflow.sysconfig.get_build_info()
    gpus = [d.physical_device_desc for d in device_lib.list_local_devices()
            if d.device_type == 'GPU']
    print('TensorFlow version: {}'.format(tensorflow.__version__))
    print('GPU: {}'.format(gpus))
    print('CUDA version -> {}'.format(build.get('cuda_version')))
    print('cuDNN version -> {}'.format(build.get('cudnn_version')))

print_info()
The output of this type of script tells you exactly what TensorFlow sees: the GPU name (e.g., NVIDIA GeForce RTX 2060), the compute capability, and the integrated CUDA/cuDNN versions. This allows you to confirm that the combinations are consistent and that the correct device is being used; you can consult a glossary of terms if you have any questions about the terminology.
If you want to take it a step further, tools like ai-benchmark allow you to compare GPU and CPU performance across various popular neural networks (MobileNet, Inception, ResNet, VGG, etc.). To install it within your virtual environment with an active GPU:
pip install ai-benchmark
Then, from a Python interpreter, you can run a benchmark on the GPU with:
from ai_benchmark import AIBenchmark
benchmark_gpu = AIBenchmark(use_CPU=False)
benchmark_gpu.run_training()
The results show training times per model and an overall "Device Training Score". In tests with an RTX 2060, for example, MobileNet-V2 training with batch=50 and size 224×224 took around 325 ms per iteration on the GPU.
To compare with CPU, you can run:
benchmark_cpu = AIBenchmark(use_CPU=True)
benchmark_cpu.run_training()
Under those same conditions, the CPU took approximately 3148 ms per iteration for MobileNet-V2, nearly 10 times slower than the GPU. This difference, with variations, is repeated in other benchmark models, clearly illustrating the advantage of using the GPU for intensive training.
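The reported numbers translate directly into the headline speedup; a couple of lines of arithmetic using the figures quoted above make it explicit:

```python
# Iteration times reported for MobileNet-V2 (batch=50, 224x224) on the RTX 2060 setup
gpu_ms = 325   # GPU time per iteration
cpu_ms = 3148  # CPU time per iteration
print("GPU speedup: {:.1f}x".format(cpu_ms / gpu_ms))  # prints "GPU speedup: 9.7x"
```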
It is also possible to perform a more basic check in a TensorFlow 2.1.0/2.10 environment by running:
$ python
>>> import tensorflow as tf
>>> tf.__version__
>>> tf.test.gpu_device_name()
If tf.test.gpu_device_name() returns something like "/device:GPU:0", the GPU is being detected and used by TensorFlow. If it returns an empty string, the CUDA/cuDNN stack is probably not configured correctly or there's a version incompatibility.
By combining these checks, setting up virtual environments with Miniconda, and carefully choosing versions (TensorFlow 2.10 for native Windows with CUDA 11.2 and cuDNN 8.1, or tensorflow-cpu 2.10 with the DirectML plugin), you can get TensorFlow to take advantage of your GPU on Windows 11 without going crazy over cryptic errors or breaking other Python installations you already have on your machine.