I have written an algorithm using TensorFlow. The structure of my code is as follows:
Read data from a CSV file and store it in a list of lists, where each inner list contains a single line from the CSV.
Use the feed_dict approach to feed the graph a single line of data. This is done in a loop until all the lines are processed.
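Roughly, the setup looks like the sketch below. The CSV contents, array shapes, and the stacking step are illustrative assumptions, not my actual data; the NumPy stacking at the end is the kind of "bulk transfer" I have in mind:

```python
import csv
import io

import numpy as np

# Stand-in CSV contents (my real data comes from a file on disk).
csv_text = "1.0,2.0\n3.0,4.0\n5.0,6.0\n"

# Step 1: read the CSV into a list of lists, one inner list per line.
rows = [row for row in csv.reader(io.StringIO(csv_text))]

# Step 2 (current approach): feed one row at a time, e.g.
#   for row in rows:
#       sess.run(train_op, feed_dict={x: row})
# which, if I understand correctly, would issue one small
# host-to-device copy per iteration.

# Possible bulk alternative: stack all rows into one contiguous
# array and feed it once, so a single large copy replaces many
# small ones.
batch = np.asarray(rows, dtype=np.float32)
print(batch.shape)
```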
The TF graph is executed on the GPU. My question concerns the data transfer that happens from the CPU to the GPU. Does using feed_dict mean there will be lots of small transfers from the host to the device? If so, would it be feasible to do one bulk transfer via feed_dict and iterate over the rows with a loop inside the TF graph?