TensorFlow 1.12.2, a Python deep learning library, released

Updated 2024-11-24 · 61.29 MB ZIP
Resource summary: tensorflow-1.12.2-cp33-cp33m-macosx_10_11_x86_64.whl is an installation package of the TensorFlow Python library for the macOS platform. TensorFlow is an open-source machine learning framework developed by Google, widely used in artificial intelligence and deep learning. This package targets Python 3.3 and 64-bit x86 machines running macOS 10.11 or later.

The file is distributed in the wheel format, one of Python's package formats: a precompiled binary package that pip can install directly, which is significantly faster than building from source. To use this resource, unpack the download and install the wheel with pip; detailed steps are available via the provided link, which points to a blog post describing how to install TensorFlow on Windows and may include notes relevant to macOS. macOS users may also need to install or upgrade the Xcode Command Line Tools so that the system has the tooling TensorFlow expects. The package is presented as an official build, so its stability and compatibility are generally assured; even so, users should confirm that their operating system and Python environment meet the requirements before installing.

TensorFlow supports building and training machine learning models and is applied in many scenarios, including image recognition, language processing, and predictive analytics. Version 1.12.2 emphasizes stability and performance optimization, making it suitable for researchers and developers working with large-scale data and complex model training. Beyond the core library, TensorFlow provides a full set of tools and frameworks for building and deploying models: it supports many neural network architectures, lets users define custom layers and operations, and its distributed computing support scales across multiple CPUs and GPUs for large machine learning workloads. Its design emphasizes flexibility and portability, which is why it is widely adopted in both industry and academia.

The tags mention Python, a widely used high-level programming language favored by developers for its clear, concise syntax and strong library ecosystem; as a Python library, TensorFlow makes machine learning and deep learning in Python considerably more convenient. Artificial intelligence (AI) is a broad field spanning machine learning, deep learning, natural language processing, and computer vision, and TensorFlow is one of its core tools. After installing the wheel, developers can write Python code against TensorFlow's APIs to carry out data processing, model construction, training, evaluation, and deployment, from simple linear regression up to complex deep neural networks.
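The long filename is not arbitrary: per the wheel specification (PEP 427), it encodes the package name, version, and compatibility tags (Python implementation, ABI, and platform). A minimal sketch, using only the standard library, of how those fields can be read off this file's name:

```python
# Decode the compatibility tags embedded in a wheel filename.
# PEP 427 layout: {name}-{version}-{python tag}-{abi tag}-{platform tag}.whl
filename = "tensorflow-1.12.2-cp33-cp33m-macosx_10_11_x86_64.whl"

stem = filename[:-len(".whl")]
name, version, python_tag, abi_tag, platform_tag = stem.split("-", 4)

print(name)          # package name: tensorflow
print(version)       # version: 1.12.2
print(python_tag)    # cp33  -> built for CPython 3.3
print(abi_tag)       # cp33m -> CPython 3.3, pymalloc ABI
print(platform_tag)  # macosx_10_11_x86_64 -> macOS 10.11+, 64-bit x86
```

pip performs an equivalent check at install time: it refuses a wheel whose tags do not match the running interpreter and platform, which is why this particular file can only be installed under a CPython 3.3 on 64-bit macOS.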
Uploaded 2019-01-11
Self-compiled TensorFlow:
1. Python 3.5, TensorFlow 1.12;
2. Supports CUDA 10.0, cuDNN 7.3.1, TensorRT-5.0.2.6-cuda10.0-cudnn7.3;
3. No MKL support.
Hardware/software environment: Ubuntu 16.04, GeForce GTX 1080 Ti

Configuration:
hp@dla:~/work/ts_compile/tensorflow$ ./configure
WARNING: --batch mode is deprecated. Please instead explicitly shut down your Bazel server using the command "bazel shutdown".
You have bazel 0.19.1 installed.
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3

Found possible Python library paths:
  /usr/local/lib/python3.5/dist-packages
  /usr/lib/python3/dist-packages
Please input the desired Python library path to use. Default is [/usr/local/lib/python3.5/dist-packages]

Do you wish to build TensorFlow with XLA JIT support? [Y/n]:
XLA JIT support will be enabled for TensorFlow.

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]:
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]:
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Please specify the CUDA SDK version you want to use. [Leave empty to default to CUDA 10.0]:

Please specify the location where CUDA 10.0 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]: /usr/local/cuda-10.0

Please specify the cuDNN version you want to use. [Leave empty to default to cuDNN 7]: 7.3.1

Please specify the location where cuDNN 7 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda-10.0]:

Do you wish to build TensorFlow with TensorRT support? [y/N]: y
TensorRT support will be enabled for TensorFlow.

Please specify the location where TensorRT is installed. [Default is /usr/lib/x86_64-linux-gnu]: /home/hp/bin/TensorRT-5.0.2.6-cuda10.0-cudnn7.3/targets/x86_64-linux-gnu

Please specify the locally installed NCCL version you want to use. [Default is to use https://github.com/nvidia/nccl]:

Please specify a list of comma-separated Cuda compute capabilities you want to build with. You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus. Please note that each additional compute capability significantly increases your build time and binary size. [Default is: 6.1,6.1,6.1]:

Do you want to use clang as CUDA compiler? [y/N]:
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:

Do you wish to build TensorFlow with MPI support? [y/N]:
No MPI support will be enabled for TensorFlow.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]:

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=" to your build command. See .bazelrc for more details.
  --config=mkl             # Build with MKL support.
  --config=monolithic      # Config for mostly static monolithic build.
  --config=gdr             # Build with GDR support.
  --config=verbs           # Build with libverbs support.
  --config=ngraph          # Build with Intel nGraph support.
  --config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
Preconfigured Bazel build configs to DISABLE default on features:
  --config=noaws           # Disable AWS S3 filesystem support.
  --config=nogcp           # Disable GCP support.
  --config=nohdfs          # Disable HDFS support.
  --config=noignite        # Disable Apache Ignite support.
  --config=nokafka         # Disable Apache Kafka support.
  --config=nonccl          # Disable NVIDIA NCCL support.
Configuration finished

Build:
bazel build --config=opt --verbose_failures //tensorflow/tools/pip_package:build_pip_package

Uninstall any existing TensorFlow:
hp@dla:~/temp$ sudo pip3 uninstall tensorflow

Install the self-built wheel:
hp@dla:~/temp$ sudo pip3 install tensorflow-1.12.0-cp35-cp35m-linux_x86_64.whl
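The interactive prompts in the transcript above can also be answered ahead of time by exporting environment variables before running ./configure, which is convenient for scripted builds. A sketch reproducing this build's answers; the variable names are assumed from TensorFlow 1.12's configure.py and should be verified against your checkout:

```shell
# Non-interactive answers for ./configure (assumed variable names --
# check configure.py in your TensorFlow source tree before relying on them).
export PYTHON_BIN_PATH=/usr/bin/python3
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=10.0
export CUDA_TOOLKIT_PATH=/usr/local/cuda-10.0
export TF_CUDNN_VERSION=7.3.1
export TF_NEED_TENSORRT=1
export TENSORRT_INSTALL_PATH=/home/hp/bin/TensorRT-5.0.2.6-cuda10.0-cudnn7.3/targets/x86_64-linux-gnu
export TF_CUDA_COMPUTE_CAPABILITIES=6.1
# then run ./configure followed by the bazel build command shown above
```

Note that the compute-capability list only needs each architecture once: 6.1 covers the GTX 1080 Ti regardless of how many cards are installed, and every extra entry lengthens the build and enlarges the binary.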