Installation: TensorFlow (GPU), Keras, a Conda environment, NVIDIA CUDA and NVIDIA cuDNN all appear to work, and the paths are set. If you look at the last line of the output below, the message just stops mid-sentence at the final "Hint: If you" line. I need some advice.

My specs: Intel i7, 8 cores / 2 TB hard drive / 16 GB RAM / 4.1 GHz CPU clock; NVIDIA GTX 650 (2 GB of GPU memory, as reported in the log below); NVIDIA CUDA 9.0; cuDNN for CUDA 9.0.
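A quick way to double-check that this stack is actually picked up from R (a minimal sketch using the TensorFlow 1.x tf.test helpers; it only reports whether a CUDA device is visible):

library(tensorflow)

# Confirm that CUDA/cuDNN are found and that a GPU device is visible to TensorFlow
tf$test$is_gpu_available()   # expected: TRUE
tf$test$gpu_device_name()    # expected: "/device:GPU:0"

The gpu_device.cc lines in the log further down show the same thing: the GTX 650 is found and a TensorFlow device is created with about 1444 MB of memory.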

summary(model)
_________________________________________________________________
Layer (type)                     Output Shape          Param #
=================================================================
conv2d_1 (Conv2D)                (None, 198, 198, 32)  896
_________________________________________________________________
conv2d_2 (Conv2D)                (None, 196, 196, 32)  9248
_________________________________________________________________
max_pooling2d_1 (MaxPooling2D)   (None, 98, 98, 32)    0
_________________________________________________________________
dropout_1 (Dropout)              (None, 98, 98, 32)    0
_________________________________________________________________
conv2d_3 (Conv2D)                (None, 96, 96, 64)    18496
_________________________________________________________________
conv2d_4 (Conv2D)                (None, 94, 94, 64)    36928
_________________________________________________________________
max_pooling2d_2 (MaxPooling2D)   (None, 47, 47, 64)    0
_________________________________________________________________
dropout_2 (Dropout)              (None, 47, 47, 64)    0
_________________________________________________________________
flatten_1 (Flatten)              (None, 141376)        0
_________________________________________________________________
dense_1 (Dense)                  (None, 512)           72385024
_________________________________________________________________
dropout_3 (Dropout)              (None, 512)           0
_________________________________________________________________
dense_2 (Dense)                  (None, 3)             1539
=================================================================

Total params: 72,452,131
Trainable params: 72,452,131
Non-trainable params: 0
_________________________________________________________________

# Model fitting
> history <- model %>% fit(train, trainLabels, epochs = 60, batch_size = 32,
+     validation_split = 0.20, validation_data = list(test, testLabels))
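For reference, the summary above corresponds to a model along these lines. This is a reconstruction from the layer shapes and parameter counts: the 3x3 kernels and the 200x200x3 input shape follow directly from the output shapes, while the dropout rates, the activations and the compile settings (apart from the SGD optimizer named in the error trace below) are assumptions.

library(keras)

model <- keras_model_sequential()
model %>%
  # 200x200x3 input with 3x3 "valid" convolutions gives the 198/196/... widths in the summary
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(200, 200, 3)) %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu") %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_dropout(rate = 0.25) %>%                        # rate assumed
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_dropout(rate = 0.25) %>%                        # rate assumed
  layer_flatten() %>%                                   # 47 * 47 * 64 = 141376
  layer_dense(units = 512, activation = "relu") %>%     # 141376 * 512 + 512 = 72,385,024 params
  layer_dropout(rate = 0.5) %>%                         # rate assumed
  layer_dense(units = 3, activation = "softmax")        # 3-class output

model %>% compile(
  optimizer = optimizer_sgd(),              # SGD appears in the gradient node names in the error
  loss = "categorical_crossentropy",        # assumed for the 3-class labels
  metrics = "accuracy"
)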

Model fitting, TensorFlow (GPU) output:

2018-08-08 10:31:05.185537: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-08-08 10:31:05.383384: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1392] Found device 0 with properties:
name: GeForce GTX 650 major: 3 minor: 0 memoryClockRate(GHz): 1.202
pciBusID: 0000:01:00.0
totalMemory: 2.00GiB freeMemory: 1.66GiB
2018-08-08 10:31:05.383846: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1471] Adding visible gpu devices: 0
2018-08-08 10:31:05.646717: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:952] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-08-08 10:31:05.646966: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:958]      0
2018-08-08 10:31:05.647074: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:971] 0:   N
2018-08-08 10:31:05.647267: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1084] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 1444 MB memory) -> physical GPU (device: 0, name: GeForce GTX 650, pci bus id: 0000:01:00.0, compute capability: 3.0)
2018-08-08 10:31:08.347552: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 109.02MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-08-08 10:31:08.347941: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 436.88MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-08-08 10:31:08.496678: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 384.53MiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2018-08-08 10:31:18.498990: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:275] Allocator (GPU_0_bfc) ran out of memory trying to allocate 126.62MiB. Current allocation summary follows.
2018-08-08 10:31:18.500375: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (256):   Total Chunks: 56, Chunks in use: 56. 14.0KiB allocated for chunks. 14.0KiB in use in bin. 3.1KiB client-requested in use in bin.
2018-08-08 10:31:18.501613: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (512):   Total Chunks: 1, Chunks in use: 1. 512B allocated for chunks. 512B in use in bin. 324B client-requested in use in bin.
2018-08-08 10:31:18.503105: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:630] Bin (1024):  Total Chunks: 1, Chunks in use: 1. 1.3KiB allocated for chunks. 1.3KiB in use in bin. 1.0KiB client-requested in use in bin.
. . . {similar messages intentionally removed} . . .
2018-08-08 10:31:18.553579: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:674] 4 Chunks of size 289538048 totalling 1.08GiB
2018-08-08 10:31:18.553772: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:678] Sum Total of in-use chunks: 1.40GiB
2018-08-08 10:31:18.553963: I T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:680] Stats:
Limit:                  1514350182
InUse:                  1501699072
MaxInUse:               1501699072
NumAllocs:                      98
MaxAllocSize:            289538048

2018-08-08 10:31:18.554294: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:279] ****************************************************************************
2018-08-08 10:31:18.554562: W T:\src\github\tensorflow\tensorflow\core\framework\op_kernel.cc:1318] OP_REQUIRES failed at conv_ops.cc:693 : Resource exhausted: OOM when allocating tensor with shape[27,32,196,196] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
Error in py_call_impl(callable, dots$args, dots$keywords) :
  ResourceExhaustedError: OOM when allocating tensor with shape[27,32,196,196] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
	 [[Node: conv2d_2/convolution = Conv2D[T=DT_FLOAT, _class=["loc:@training/SGD/gradients/conv2d_2/convolution_grad/Conv2DBackpropFilter"], data_format="NCHW", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](conv2d_1/Relu, conv2d_2/kernel/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

[[Node: loss/mul/_95 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_788_loss/mul", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

Hint: If you wa
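Working through the numbers the allocator reports (this is just the arithmetic implied by the log, with float32 = 4 bytes):

# Sizes implied by the log, in MiB / GiB
141376 * 512 * 4 / 2^20          # ≈ 276.1 MiB: one dense_1 weight matrix = the 289538048-byte chunks
4 * 289538048 / 2^30             # ≈ 1.08 GiB: the four dense_1-sized chunks the allocator lists
27 * 32 * 196 * 196 * 4 / 2^20   # ≈ 126.6 MiB: the conv2d_2 tensor of shape [27,32,196,196] that fails
1514350182 / 2^20                # ≈ 1444 MiB: the allocator "Limit", i.e. what the GTX 650 offers

So four dense_1-sized buffers (weights, gradients and related copies) already account for about 1.08 GiB of the roughly 1.44 GiB budget, and the [27,32,196,196] conv2d_2 activations no longer fit, which is where the OOM is raised.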