
Tf image resize 3d



    prepare_for_transpose = tf.reshape(expanded_it, […])
    transpose_to_align_neighbors = tf.transpose(prepare_for_transpose, […])
    expand_it_all = tf.reshape(transpose_to_align_neighbors, […])
    # do a conv layer here to 'blend' neighbor values, like:
    # tf.nn.conv3d(expand_it_all, averager, padding="SAME")
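The commented tf.nn.conv3d step relies on an "averager" kernel to blend each voxel with its neighbours. As a rough numpy stand-in for what that blend does, here is a 3x3x3 box average built from shifted copies; the function name and window size are illustrative assumptions, not from the answer:

```python
import numpy as np

def box_blend(volume):
    # average each voxel with its 3x3x3 neighbourhood (edge-replicated
    # padding), i.e. a box filter written with shifted copies instead
    # of an actual conv op
    padded = np.pad(volume, 1, mode="edge")
    out = np.zeros(volume.shape)
    for dz in range(3):
        for dy in range(3):
            for dx in range(3):
                out += padded[dz:dz + volume.shape[0],
                              dy:dy + volume.shape[1],
                              dx:dx + volume.shape[2]]
    return out / 27.0

vol = np.zeros((3, 3, 3))
vol[1, 1, 1] = 27.0        # one bright voxel in the centre
blended = box_blend(vol)
print(blended[1, 1, 1])    # 1.0: the spike is spread over its neighbourhood
```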


The reshape will stack the color frames one after the other. Your tensorflow no doubt has a convolution layer immediately after the input. The convolution layer will process your stack of color frames as easily as your monochrome frames (albeit with more computing power and parameters).

Okay, here is how to scale the image: use tf.image.resize_images after reshaping, like so:

    reshapedData = tf.image.resize_images(tf.reshape(yourData, […]), new_size)

where new_size is a 2D tensor of […], or in your case […]. If after all this resizing of the image you want it to again be in the shape […], then simply use this code: […].

Last addition, to address scaling the depth also. We'll want to reshape this matrix and expand it, similar to how a 3d matrix is expanded in numpy, like this:

    a = np.array([…])
    a.reshape([…]).dot(np.ones([…])).reshape([…]).transpose([…]).reshape([…])

Here is the tensorflow code:

    isolate = tf.transpose(yourData, […])
    flatten_it_all = tf.reshape(isolate, […])  # flatten it
    expanded_it = flatten_it_all * tf.ones([…])
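A concrete instance of the numpy reshape/dot(np.ones)/reshape expansion idiom the answer sketches, doubling the last axis of a 2x2 matrix (the shapes here are my own example):

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])

# add a trailing axis, multiply against a ones vector to duplicate each
# element, then fold the new axis back in: nearest-neighbour expansion
expanded = a.reshape(2, 2, 1).dot(np.ones((1, 2), dtype=a.dtype)).reshape(2, 4)
print(expanded)
# [[1 1 2 2]
#  [3 3 4 4]]
```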


Update to include the changing 4th dimension: you would like to sometimes use the dimension […]. It is the same solution, but now you can't use squeeze and instead are just left with reshape, like so: […]. How could this work? Let's imagine that depth is the number of image frames and n is the color depth (possibly 3 for RGB).
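A numpy sketch of folding that colour axis n into the batch axis; the shapes are made up, and the leading transpose (my addition, not in the answer) moves n next to batch so each colour frame stays contiguous:

```python
import numpy as np

# made-up shapes: [batch, width, height, depth, n] with n = 2 colour frames
x = np.arange(2 * 4 * 4 * 3 * 2).reshape(2, 4, 4, 3, 2)

# move n next to batch, then merge the two: [batch * n, width, height, depth]
stacked = np.transpose(x, (0, 4, 1, 2, 3)).reshape(2 * 2, 4, 4, 3)
print(stacked.shape)  # (4, 4, 4, 3): colour frames stacked one after the other
```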


My approach to this would be to resize the image along two axes. In the code I paste below, I resample along depth and then width:

    def resize_by_axis(image, dim_1, dim_2, ax, is_grayscale):
        resized_list = []
        if is_grayscale:
            unstack_img_depth_list = [tf.expand_dims(i, 2) for i in tf.unstack(image, axis=ax)]
            for i in unstack_img_depth_list:
                resized_list.append(tf.image.resize_images(i, [dim_1, dim_2], method=0))
            stack_img = tf.squeeze(tf.stack(resized_list, axis=ax))
        else:
            unstack_img_depth_list = tf.unstack(image, axis=ax)
            for i in unstack_img_depth_list:
                resized_list.append(tf.image.resize_images(i, [dim_1, dim_2], method=0))
            stack_img = tf.stack(resized_list, axis=ax)
        return stack_img

    resized_along_depth = resize_by_axis(x, 50, 60, 2, True)
    resized_along_width = resize_by_axis(resized_along_depth, 50, 70, 1, True)

where x is the 3-d tensor, either grayscale or RGB, and resized_along_width is the final resized tensor. Here we want to resize the 3-d image to dimensions of (50, 60, 70).

A tensor is already 4D, with 1D allocated to 'batch_size' and the other 3D allocated for width, height and depth. If you are looking to process a 3D image and have batches of them in this configuration, then use the squeeze function to remove the unnecessary final dimension, which is what tensorflow will use gracefully. If you have the dimensions handy and want to use the reshape capability of tensorflow instead, you could do that as well. Personally, I'd use squeeze to declare to the next programmer that your code only intends to get rid of dimensions of size 1, whereas reshape could mean so much more and would leave the next dev having to figure out why you are reshaping.
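The squeeze-versus-reshape advice can be sanity-checked in numpy, whose np.squeeze and reshape mirror tf.squeeze and tf.reshape here; the shapes are made-up examples:

```python
import numpy as np

# made-up shapes: [batch, width, height, depth, 1]
batch = np.zeros((8, 32, 32, 16, 1))

# squeeze documents the intent: only a size-1 dimension can be dropped
squeezed = np.squeeze(batch, axis=4)

# reshape reaches the same shape, but says nothing about why
reshaped = batch.reshape(8, 32, 32, 16)

print(squeezed.shape == reshaped.shape)  # True
```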


I need to resize some 3D data, like in the tf.image.resize_images method for 2d data. I was thinking I could try and run tf.image.resize_images on it in a loop and swap axes, but I thought there must be an easier way. Simple nearest neighbour should be fine.

Any ideas? It's not ideal, but I could settle for the case where the data is just 0 or 1 and use something like:

    tf.where(boolMap, tf.fill(data_im*2, 0), tf.fill(data_im*2, 1))

But I'm not sure how to get boolMap. Would use of tf.while_loop to go over all the values dramatically decrease performance? I feel like it would on GPU, unless it has some kind of automatic loop parallelisation.

I have come up with this:

    def resize3D(self, input_layer, width_factor, height_factor, depth_factor):
        rsz1 = tf.image.resize_images(tf.reshape(input_layer, […, shape, shape, shape*shape]), [shape*width_factor, shape*height_factor])
        rsz2 = tf.image.resize_images(tf.reshape(tf.transpose(tf.reshape(rsz1, […, shape*width_factor, shape*height_factor, shape, shape]), […]), […, shape, shape*height_factor, shape*width_factor*shape]), [shape*depth_factor, shape*height_factor])
        return tf.transpose(tf.reshape(rsz2, […, shape*depth_factor, shape*height_factor, shape*width_factor, shape]), […])

I believe nearest neighbour shouldn't have the stair-casing effect (I intentionally removed the colour). Har's answer works correctly; however, I would like to know what's wrong with mine, if anyone can crack it.
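For the integer-factor, nearest-neighbour case the question describes, no boolMap is needed at all: repeating voxels along each axis gives the same result. A numpy sketch (the function name is my own):

```python
import numpy as np

def nn_upsample3d(volume, factor):
    # nearest-neighbour upscaling by an integer factor: every voxel is
    # simply repeated `factor` times along each of the three axes
    out = volume
    for axis in range(3):
        out = np.repeat(out, factor, axis=axis)
    return out

# a tiny 0/1 volume, like the binary data mentioned in the question
vol = np.array([[[0, 1],
                 [1, 0]],
                [[1, 0],
                 [0, 1]]])

up = nn_upsample3d(vol, 2)
print(up.shape)  # (4, 4, 4)
```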









