Installing TensorFlow-GPU


Installing the GPU version of TensorFlow (Windows)

Preparation

Start from a clean system with no existing Python installation (uninstall any you find). My machine also has VS2015 and VS2017 installed (I am not sure whether this is required).
TensorFlow, CUDA, and cuDNN are all updated frequently, so this guide uses close to the latest versions as of November 19, 2018.

Installation

1. Install Anaconda (details omitted). One note: check all the installer options, including adding it to PATH.
2. Install the graphics driver with default settings.
3. Install CUDA 9.0 with default settings.
4. Install cuDNN 7.x: extract the archive into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0 (the default CUDA install directory).
Then add C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin to the PATH environment variable.
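To double-check that the cuDNN bin directory really made it onto PATH, a small Python sketch can scan every PATH entry for the cuDNN DLL (`cudnn64_7.dll` is the library name cuDNN 7 ships on Windows; the helper function itself is hypothetical):

```python
import os

def find_on_path(filename, path_value=None):
    """Return every PATH directory that contains the given file."""
    if path_value is None:
        path_value = os.environ.get("PATH", "")
    return [d for d in path_value.split(os.pathsep)
            if d and os.path.isfile(os.path.join(d, filename))]

# On Windows, cuDNN 7 for CUDA 9.0 ships as cudnn64_7.dll:
# find_on_path("cudnn64_7.dll") should list the bin directory added above.
```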

Verification

1. Start Anaconda Prompt and run conda env list. It should show only a single base (or root) environment, meaning only one environment exists.

2. Change Anaconda's package source by running:

```
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
conda config --set show_channel_urls yes
```

This switches Anaconda's download source to the Tsinghua TUNA mirror.
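If you want to confirm programmatically that the mirror channels were written, `.condarc` is plain YAML; a rough stdlib-only parse of its `channels:` list (a sketch that assumes the simple layout conda writes, no PyYAML) looks like:

```python
import os

def condarc_channels(path=None):
    """Naively read the entries of the 'channels:' list from a .condarc file."""
    if path is None:
        path = os.path.expanduser("~/.condarc")
    channels, in_channels = [], False
    if not os.path.isfile(path):
        return channels
    with open(path) as f:
        for line in f:
            stripped = line.strip()
            if stripped == "channels:":
                in_channels = True
            elif in_channels and stripped.startswith("- "):
                channels.append(stripped[2:])
            elif in_channels and stripped and not line.startswith((" ", "-")):
                in_channels = False  # next top-level key ends the list
    return channels
```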

3. Create a Python environment for TensorFlow:

```
conda create -n tf-gpu-py3.5 python=3.5
```

Example:

D:\Users\zyb>conda create -n tf-gpu-py3.5 python=3.5
Solving environment: done

## Package Plan ##

  environment location: C:\anaconda35\envs\tf-gpu-py3.5

  added / updated specs:
    - python=3.5


The following NEW packages will be INSTALLED:

    certifi:        2018.8.24-py35_1001 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
    pip:            18.0-py35_1001      https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
    python:         3.5.5-he025d50_2    https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
    setuptools:     40.4.3-py35_0       https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
    vc:             14.1-h21ff451_1     https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/peterjc123
    vs2017_runtime: 15.4.27004.2010-1   https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/peterjc123
    wheel:          0.32.0-py35_1000    https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge
    wincertstore:   0.2-py35_1002       https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/conda-forge

Proceed ([y]/n)? y
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
#     $ conda activate tf-gpu-py3.5
#
# To deactivate an active environment, use
#
#     $ conda deactivate
4. Activate the environment you just created:
conda activate tf-gpu-py3.5
5. Install the GPU build of TensorFlow:
conda install tensorflow-gpu
6. Verify with code. Start python and enter:
import tensorflow as tf
Check whether this raises an error. If it does, install the missing package with conda install <package name> (e.g. numpy). If it does not, continue:
a = tf.constant([1.0,2.0,3.0,4.0,5.0,6.0],shape=[2,3],name='a')
b = tf.constant([1.0,2.0,3.0,4.0,5.0,6.0],shape=[3,2],name='b')
c = tf.matmul(a,b)
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
#After this step a warning appears:
#Device mapping: no known devices.
#2018-11-19 22:18:15.899459: I T:\src\github\tensorflow\tensorflow\core\common_runtime\direct_session.cc:288] Device mapping:
#Ignore it and run the next step
print(sess.run(c))
#Output:
MatMul: (MatMul): /job:localhost/replica:0/task:0/device:CPU:0
2018-11-19 22:18:23.059234: I T:\src\github\tensorflow\tensorflow\core\common_runtime\placer.cc:935] MatMul: (MatMul)/job:localhost/replica:0/task:0/device:CPU:0
a: (Const): /job:localhost/replica:0/task:0/device:CPU:0
2018-11-19 22:18:23.064109: I T:\src\github\tensorflow\tensorflow\core\common_runtime\placer.cc:935] a: (Const)/job:localhost/replica:0/task:0/device:CPU:0
b: (Const): /job:localhost/replica:0/task:0/device:CPU:0
2018-11-19 22:18:23.069134: I T:\src\github\tensorflow\tensorflow\core\common_runtime\placer.cc:935] b: (Const)/job:localhost/replica:0/task:0/device:CPU:0
[[22. 28.]
 [49. 64.]]
Verification succeeded.
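The `[[22. 28.] [49. 64.]]` result is easy to double-check without TensorFlow; a plain-Python matrix multiply of the same two constants gives the same numbers:

```python
def matmul(a, b):
    """Plain-Python matrix multiply: (m x n) @ (n x p) -> (m x p)."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

# The same values the TensorFlow snippet feeds to tf.constant:
a = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]    # shape [2, 3]
b = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # shape [3, 2]
print(matmul(a, b))  # [[22.0, 28.0], [49.0, 64.0]]
```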

## Installing GPU TensorFlow on Ubuntu

### Preparation

> 1. Anaconda, Linux version: download it from the Tsinghua TUNA mirror.
> 2. Graphics driver: download it from the NVIDIA website. [Baidu Cloud download for the files needed in steps 3 and 4](https://pan.baidu.com/s/1MjSKSkMKHjfqoY5nGuIXTQ)
> 3. CUDA 9.0: download the Linux version from the NVIDIA website.
> 4. cuDNN 7.x: download the Linux version from the NVIDIA website (requires registering and joining the developer program).

### Installation

1. Install Anaconda. Installing it directly into your home directory is fine. After the installation, reload your environment variables:
source ~/.bashrc
2. Install the graphics driver. Download it from the NVIDIA website and install it with sudo. The first step of the installer asks you to read the license agreement; press q to exit the pager.

3. Install CUDA 9.0 with default settings. The first step again shows the license agreement; press q to exit the pager. CUDA 9.0 has one base installer plus 4 numbered patch installers. Run `sudo chmod +x *.run` to make all 5 files executable, then install them one by one, base first. Note: run the patch installers with sudo as well, otherwise they fail with "Installation directory '/usr/local/cuda-9.0' is not writable", as the transcript below shows.
tf@lolita-ThinkStation-P318:~$ ls
anaconda3                        cuda_9.0.176.1_linux-1.run  cuda_9.0.176_384.81_linux-base.run  cuda_9.0.176.4_linux-4.run
Anaconda3-5.3.0-Linux-x86_64.sh  cuda_9.0.176.2_linux-2.run  cuda_9.0.176.3_linux-3.run          examples.desktop
tf@lolita-ThinkStation-P318:~$ ./cuda_9.0.176_384.81_linux-base.run 
Logging to /tmp/cuda_install_6527.log
Using more to view the EULA.
End User License Agreement
--------------------------


Preface
-------

The Software License Agreement in Chapter 1 and the Supplement
in Chapter 2 contain license terms and conditions that govern
the use of NVIDIA software. By accepting this agreement, you
agree to comply with all the terms and conditions applicable
to the product(s) included herein.


NVIDIA Driver


Description

This package contains the operating system driver and
fundamental system software components for NVIDIA GPUs.


NVIDIA CUDA Toolkit


Description

The NVIDIA CUDA Toolkit provides command-line and graphical
tools for building, debugging and optimizing the performance
of applications accelerated by NVIDIA GPUs, runtime and math
libraries, and documentation including programming guides,
user manuals, and API references.

Do you accept the previously read EULA?
accept/decline/quit: accept

Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 384.81?
(y)es/(n)o/(q)uit: n

Install the CUDA 9.0 Toolkit?
(y)es/(n)o/(q)uit: y

Enter Toolkit Location
 [ default is /usr/local/cuda-9.0 ]: 

/usr/local/cuda-9.0 is not writable.
Do you wish to run the installation with 'sudo'?
(y)es/(n)o: y

Please enter your password: 
Do you want to install a symbolic link at /usr/local/cuda?
(y)es/(n)o/(q)uit: n

Install the CUDA 9.0 Samples?
(y)es/(n)o/(q)uit: y

Enter CUDA Samples Location
 [ default is /home/tf ]: 

Installing the CUDA Toolkit in /usr/local/cuda-9.0 ...
Installing the CUDA Samples in /home/tf ...
Copying samples to /home/tf/NVIDIA_CUDA-9.0_Samples now...
Finished copying samples.

===========
= Summary =
===========

Driver:   Not Selected
Toolkit:  Installed in /usr/local/cuda-9.0
Samples:  Installed in /home/tf

Please make sure that
 -   PATH includes /usr/local/cuda-9.0/bin
 -   LD_LIBRARY_PATH includes /usr/local/cuda-9.0/lib64, or, add /usr/local/cuda-9.0/lib64 to /etc/ld.so.conf and run ldconfig as root

To uninstall the CUDA Toolkit, run the uninstall script in /usr/local/cuda-9.0/bin

Please see CUDA_Installation_Guide_Linux.pdf in /usr/local/cuda-9.0/doc/pdf for detailed information on setting up CUDA.

***WARNING: Incomplete installation! This installation did not install the CUDA Driver. A driver of version at least 384.00 is required for CUDA 9.0 functionality to work.
To install the driver using this installer, run the following command, replacing <CudaInstaller> with the name of this run file:
    sudo <CudaInstaller>.run -silent -driver

Logfile is /tmp/cuda_install_6527.log
tf@lolita-ThinkStation-P318:~$ ./cuda_9.0.176.1_linux-1.run 
Logging to /tmp/cuda_patch_7307.log
Welcome to the CUDA Patcher.
Detected pager as 'more'.
End User License Agreement
--------------------------


Preface
-------

The Software License Agreement in Chapter 1 and the Supplement
in Chapter 2 contain license terms and conditions that govern
the use of NVIDIA software. By accepting this agreement, you
agree to comply with all the terms and conditions applicable
to the product(s) included herein.


NVIDIA Driver


Description

This package contains the operating system driver and
fundamental system software components for NVIDIA GPUs.


NVIDIA CUDA Toolkit


Description

The NVIDIA CUDA Toolkit provides command-line and graphical
tools for building, debugging and optimizing the performance
of applications accelerated by NVIDIA GPUs, runtime and math
libraries, and documentation including programming guides,
user manuals, and API references.

Do you accept the previously read EULA?
accept/decline/quit: accept      

Enter CUDA Toolkit installation directory
 [ default is /usr/local/cuda-9.0 ]: 


Installation directory '/usr/local/cuda-9.0' is not writable! Ensure you are running with the correct permissions.

Options:
    --silent            : Specify a command-line, silent installation
    --installdir=dir    : Customize installation directory
    --accept-eula       : Implies acceptance of the EULA
    --help              : Print help message

    Specifying a silent installation also initiates a command-line installation.

(The remaining patches, ./cuda_9.0.176.2_linux-2.run, ./cuda_9.0.176.3_linux-3.run, and ./cuda_9.0.176.4_linux-4.run, print the same EULA and the same "not writable" error when run without sudo.)
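The "base first, then patches in numeric order" rule can be expressed as a small sort key over the installer file names (a sketch assuming the file-name pattern shown in the listing above):

```python
import re

def cuda_install_order(run_files):
    """Sort CUDA 9.0 .run installers: the base package first, then patches 1..4."""
    def key(name):
        if "base" in name:
            return 0  # e.g. cuda_9.0.176_384.81_linux-base.run
        m = re.search(r"linux-(\d+)\.run$", name)
        return int(m.group(1)) if m else 99  # unknown names sort last
    return sorted(run_files, key=key)

files = ["cuda_9.0.176.3_linux-3.run", "cuda_9.0.176_384.81_linux-base.run",
         "cuda_9.0.176.1_linux-1.run", "cuda_9.0.176.4_linux-4.run",
         "cuda_9.0.176.2_linux-2.run"]
print(cuda_install_order(files)[0])  # cuda_9.0.176_384.81_linux-base.run
```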

Then add the installation paths to the environment variables (append these lines to ~/.bashrc):
export PATH=/usr/local/cuda-9.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64:$LD_LIBRARY_PATH

#Reload the environment variables
source ~/.bashrc
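Re-running `export PATH=...:$PATH` in every shell quietly grows PATH with duplicates; a defensive way to build such a value (a Python sketch of the idea, not a required step) is to prepend idempotently:

```python
import os

def prepend_path(entry, path_value):
    """Prepend entry to a PATH-style string, dropping any existing duplicate."""
    parts = [p for p in path_value.split(os.pathsep) if p and p != entry]
    return os.pathsep.join([entry] + parts)

path = os.pathsep.join(["/usr/bin", "/bin"])
path = prepend_path("/usr/local/cuda-9.0/bin", path)
path = prepend_path("/usr/local/cuda-9.0/bin", path)  # second call is a no-op
print(path)
```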
4. Install cuDNN. Extracting the archive produces two directories, include and lib64:
tf@lolita-ThinkStation-P318:~$ tar zxvf cudnn-9.0-linux-x64-v7.3.1.20.tar.gz 
cuda/include/cudnn.h
cuda/NVIDIA_SLA_cuDNN_Support.txt
cuda/lib64/libcudnn.so
cuda/lib64/libcudnn.so.7
cuda/lib64/libcudnn.so.7.3.1
cuda/lib64/libcudnn_static.a
tf@lolita-ThinkStation-P318:~$ ls
anaconda3                        cuda_9.0.176.1_linux-1.run          cuda_9.0.176.3_linux-3.run            examples.desktop
Anaconda3-5.3.0-Linux-x86_64.sh  cuda_9.0.176.2_linux-2.run          cuda_9.0.176.4_linux-4.run            NVIDIA_CUDA-9.0_Samples
cuda                             cuda_9.0.176_384.81_linux-base.run  cudnn-9.0-linux-x64-v7.3.1.20.tar.gz
tf@lolita-ThinkStation-P318:~$ sudo mv cuda/include/cudnn.h /usr/local/cuda-9.0/include/
tf@lolita-ThinkStation-P318:~$ sudo mv cuda/lib64/* /usr/local/cuda-9.0/lib64/
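After moving the files into place, the installed cuDNN version can be read straight out of cudnn.h, which defines CUDNN_MAJOR / CUDNN_MINOR / CUDNN_PATCHLEVEL; a quick parser (the helper name is made up):

```python
import re

def cudnn_version(header_text):
    """Extract (major, minor, patchlevel) from the #define lines in cudnn.h."""
    version = []
    for macro in ("CUDNN_MAJOR", "CUDNN_MINOR", "CUDNN_PATCHLEVEL"):
        m = re.search(r"#define\s+%s\s+(\d+)" % macro, header_text)
        version.append(int(m.group(1)) if m else None)
    return tuple(version)

# with open("/usr/local/cuda-9.0/include/cudnn.h") as f:
#     print(cudnn_version(f.read()))  # expect (7, 3, 1) for this archive
```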

### Verification

0. Verify CUDA:
#Enter the samples directory
cd ~/NVIDIA_CUDA-9.0_Samples
#Build the samples
make -j8
#Enter the directory containing the generated executables
cd bin/x86_64/linux/release
#Run the device query test
./deviceQuery
#Output:
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 1070"
  CUDA Driver Version / Runtime Version          10.0 / 9.0
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 8116 MBytes (8510701568 bytes)
  (15) Multiprocessors, (128) CUDA Cores/MP:     1920 CUDA Cores
  GPU Max Clock rate:                            1683 MHz (1.68 GHz)
  Memory Clock rate:                             4004 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 2097152 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.0, CUDA Runtime Version = 9.0, NumDevs = 1
Result = PASS
#After seeing PASS, run the bandwidth test
./bandwidthTest 
#Output:
[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: GeForce GTX 1070
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)    Bandwidth(MB/s)
   33554432            12758.2

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)    Bandwidth(MB/s)
   33554432            12867.2

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)    Bandwidth(MB/s)
   33554432            191582.5

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
#PASS means the test succeeded; on FAIL, reboot and run it again.
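The interesting facts in that wall of deviceQuery output can be pulled out mechanically; a small parser (hypothetical helper, keyed to the exact line formats shown above):

```python
import re

def parse_device_query(text):
    """Extract device name, compute capability, and pass/fail from deviceQuery output."""
    name = re.search(r'Device 0: "([^"]+)"', text)
    cap = re.search(r"Capability Major/Minor version number:\s+([\d.]+)", text)
    return (name.group(1) if name else None,
            cap.group(1) if cap else None,
            "Result = PASS" in text)

sample = '''Device 0: "GeForce GTX 1070"
  CUDA Capability Major/Minor version number:    6.1
Result = PASS'''
print(parse_device_query(sample))  # ('GeForce GTX 1070', '6.1', True)
```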

1. Create the conda environment (same as on Windows):
conda create -n tf-gpu-py3.5 python=3.5
#
# To activate this environment, use
#
#     $ conda activate tf-gpu-py3.5
#
# To deactivate an active environment, use
#
#     $ conda deactivate
2. Activate `tf-gpu-py3.5`:
conda activate tf-gpu-py3.5
3. Install `tensorflow-gpu`:
conda install tensorflow-gpu
4. Verify with code:
(tf-gpu-py3.5) tf@lolita-ThinkStation-P318:~/anaconda3/envs$ python
Python 3.5.6 |Anaconda, Inc.| (default, Aug 26 2018, 21:41:56) 
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> a = tf.constant([1.0,2.0,3.0,4.0,5.0,6.0],shape=[2,3],name='a')
>>> b = tf.constant([1.0,2.0,3.0,4.0,5.0,6.0],shape=[3,2],name='b')
>>> c = tf.matmul(a,b)
>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
2018-11-19 22:43:27.732910: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-11-19 22:43:27.824810: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:897] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-11-19 22:43:27.825419: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 0 with properties: 
name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:01:00.0
totalMemory: 7.93GiB freeMemory: 7.64GiB
2018-11-19 22:43:27.825445: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0
2018-11-19 22:43:27.995777: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-11-19 22:43:27.995806: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971]      0 
2018-11-19 22:43:27.995826: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0:   N 
2018-11-19 22:43:27.996035: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7377 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1
2018-11-19 22:43:28.026839: I tensorflow/core/common_runtime/direct_session.cc:288] Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1

>>> print(sess.run(c))
MatMul: (MatMul): /job:localhost/replica:0/task:0/device:GPU:0
2018-11-19 22:44:23.662448: I tensorflow/core/common_runtime/placer.cc:935] MatMul: (MatMul)/job:localhost/replica:0/task:0/device:GPU:0
a: (Const): /job:localhost/replica:0/task:0/device:GPU:0
2018-11-19 22:44:23.662561: I tensorflow/core/common_runtime/placer.cc:935] a: (Const)/job:localhost/replica:0/task:0/device:GPU:0
b: (Const): /job:localhost/replica:0/task:0/device:GPU:0
2018-11-19 22:44:23.662589: I tensorflow/core/common_runtime/placer.cc:935] b: (Const)/job:localhost/replica:0/task:0/device:GPU:0
[[22. 28.]
 [49. 64.]]

Verification complete.
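Note that the op placements here read device:GPU:0, whereas the Windows log earlier showed device:CPU:0. Scanning the log_device_placement lines for this is easy to automate (a sketch keyed to the line format above):

```python
import re

def op_placements(log_text):
    """Map op names to device types from lines like
    'MatMul: (MatMul): /job:localhost/replica:0/task:0/device:GPU:0'."""
    pattern = r"^(\w+): \(\w+\): /\S*device:(\w+):\d+"
    return {m.group(1): m.group(2)
            for m in re.finditer(pattern, log_text, re.MULTILINE)}

sample = """MatMul: (MatMul): /job:localhost/replica:0/task:0/device:GPU:0
a: (Const): /job:localhost/replica:0/task:0/device:GPU:0
b: (Const): /job:localhost/replica:0/task:0/device:GPU:0"""
print(op_placements(sample))  # {'MatMul': 'GPU', 'a': 'GPU', 'b': 'GPU'}
```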
