I'm trying to compile TensorFlow after checking out the repo.
I've reached a point where I'm stuck on Google protobuf errors:
INFO: From Compiling tensorflow/core/kernels/histogram_op_gpu.cu.cc:
./tensorflow/core/lib/core/status.h(32): warning: attribute "warn_unused_result" does not apply here
external/protobuf_archive/src/google/protobuf/arena.h(719): error: more than one instance of overloaded function "google::protobuf::Arena::CreateMessageInternal" matches the argument list:
function template "T *google::protobuf::Arena::CreateMessageInternal<T>(google::protobuf::Arena *)"
function template "T *google::protobuf::Arena::CreateMessageInternal<T,Args...>(Args &&...)"
argument types are: (google::protobuf::Arena *)
detected during:
instantiation of "Msg *google::protobuf::Arena::CreateMaybeMessage<Msg>(google::protobuf::Arena *, google::protobuf::internal::true_type) [with Msg=tensorflow::TensorShapeProto_Dim]"
(729): here
instantiation of "T *google::protobuf::Arena::CreateMaybeMessage<T>(google::protobuf::Arena *) [with T=tensorflow::TensorShapeProto_Dim]"
external/protobuf_archive/src/google/protobuf/repeated_field.h(648): here
instantiation of "GenericType *google::protobuf::internal::GenericTypeHandler<GenericType>::New(google::protobuf::Arena *) [with GenericType=tensorflow::TensorShapeProto_Dim]"
external/protobuf_archive/src/google/protobuf/repeated_field.h(675): here
instantiation of "GenericType *google::protobuf::internal::GenericTypeHandler<GenericType>::NewFromPrototype(const GenericType *, google::protobuf::Arena *) [with GenericType=tensorflow::TensorShapeProto_Dim]"
external/protobuf_archive/src/google/protobuf/repeated_field.h(1554): here
instantiation of "TypeHandler::Type *google::protobuf::internal::RepeatedPtrFieldBase::Add<TypeHandler>(TypeHandler::Type *) [with TypeHandler=google::protobuf::RepeatedPtrField<tensorflow::TensorShapeProto_Dim>::TypeHandler]"
external/protobuf_archive/src/google/protobuf/repeated_field.h(2001): here
instantiation of "Element *google::protobuf::RepeatedPtrField<Element>::Add() [with Element=tensorflow::TensorShapeProto_Dim]"
bazel-out/local_darwin-opt/genfiles/tensorflow/core/framework/tensor_shape.pb.h(471): here
....
Has anyone bumped into this issue? Any ideas on how to tackle it?
(I'm using Python 2.7 in a virtual environment on OSX 10.11.5)
Luckily, someone else not only had already run into the same issue, but also found a fix and shared it. Thanks to Daniel Trebbien's comments on protobuf and Eigen, I could compile TensorFlow with GPU support on OS X:
>>> import tensorflow as tf
>>> tf.__version__
'1.6.0-rc0'
>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
2018-02-19 22:22:12.194516: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:859] OS X does not support NUMA - returning NUMA node zero
2018-02-19 22:22:12.195011: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1331] Found device 0 with properties:
name: GeForce GT 750M major: 3 minor: 0 memoryClockRate(GHz): 0.9255
pciBusID: 0000:01:00.0
totalMemory: 2.00GiB freeMemory: 12.58MiB
2018-02-19 22:22:12.195038: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1410] Adding visible gpu devices: 0
2018-02-19 22:22:14.563665: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-02-19 22:22:14.563700: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
2018-02-19 22:22:14.563707: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
2018-02-19 22:22:14.563798: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1021] Creating TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 65 MB memory) -> physical GPU (device: 0, name: GeForce GT 750M, pci bus id: 0000:01:00.0, compute capability: 3.0)
Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GT 750M, pci bus id: 0000:01:00.0, compute capability: 3.0
2018-02-19 22:22:14.697626: I tensorflow/core/common_runtime/direct_session.cc:297] Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GT 750M, pci bus id: 0000:01:00.0, compute capability: 3.0
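The device mapping above is printed when the session is created; to confirm that ops actually execute on the GPU, you can run a small computation pinned to the device. Here is a minimal sketch using the TF 1.x API shown above (the matrix values and op names are just placeholders):

import tensorflow as tf

# Pin a small matmul to the GPU explicitly; with log_device_placement=True
# the run below also prints which device each op was placed on.
with tf.device('/device:GPU:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name='a')
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]], name='b')
    c = tf.matmul(a, b, name='matmul_check')

with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))  # expected result: [[1. 3.] [3. 7.]]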
For reference, here are the patches proposed in the comments:
--- a/tensorflow/workspace.bzl
+++ b/tensorflow/workspace.bzl
@@ -353,11 +353,11 @@ def tf_workspace(path_prefix="", tf_repo_name=""):
tf_http_archive(
name = "protobuf_archive",
urls = [
- "https://mirror.bazel.build/github.com/google/protobuf/archive/396336eb961b75f03b25824fe86cf6490fb75e3a.tar.gz",
- "https://github.com/google/protobuf/archive/396336eb961b75f03b25824fe86cf6490fb75e3a.tar.gz",
+ "https://mirror.bazel.build/github.com/dtrebbien/protobuf/archive/50f552646ba1de79e07562b41f3999fe036b4fd0.tar.gz",
+ "https://github.com/dtrebbien/protobuf/archive/50f552646ba1de79e07562b41f3999fe036b4fd0.tar.gz",
],
- sha256 = "846d907acf472ae233ec0882ef3a2d24edbbe834b80c305e867ac65a1f2c59e3",
- strip_prefix = "protobuf-396336eb961b75f03b25824fe86cf6490fb75e3a",
+ sha256 = "eb16b33431b91fe8cee479575cee8de202f3626aaf00d9bf1783c6e62b4ffbc7",
+ strip_prefix = "protobuf-50f552646ba1de79e07562b41f3999fe036b4fd0",
)
--- a/tensorflow/workspace.bzl
+++ b/tensorflow/workspace.bzl
@@ -120,11 +120,11 @@ def tf_workspace(path_prefix="", tf_repo_name=""):
tf_http_archive(
name = "eigen_archive",
urls = [
- "https://mirror.bazel.build/bitbucket.org/eigen/eigen/get/2355b229ea4c.tar.gz",
- "https://bitbucket.org/eigen/eigen/get/2355b229ea4c.tar.gz",
+ "https://mirror.bazel.build/bitbucket.org/dtrebbien/eigen/get/374842a18727.tar.gz",
+ "https://bitbucket.org/dtrebbien/eigen/get/374842a18727.tar.gz",
],
- sha256 = "0cadb31a35b514bf2dfd6b5d38205da94ef326ec6908fc3fd7c269948467214f",
- strip_prefix = "eigen-eigen-2355b229ea4c",
+ sha256 = "fa26e9b9ff3a2692b092d154685ec88d6cb84d4e1e895006541aff8603f15c16",
+ strip_prefix = "dtrebbien-eigen-374842a18727",
build_file = str(Label("//third_party:eigen.BUILD")),
)
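After editing tensorflow/workspace.bzl with these changes, Bazel should re-download the patched protobuf and Eigen archives on the next build (the changed URLs and sha256 values invalidate the cached ones). Once the rebuilt pip package is installed, one quick sanity check is to list the devices TensorFlow can see; this is a minimal sketch assuming a TF 1.x install, using the commonly used device_lib helper:

from tensorflow.python.client import device_lib

# A working GPU build should report at least one entry with
# device_type == 'GPU' alongside the CPU device.
for dev in device_lib.list_local_devices():
    print(dev.device_type, dev.name, dev.physical_device_desc)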