I am trying to create a TensorBoard callback in Jupyter (Anaconda) the following way. The error occurs when write_images = True; otherwise this code works fine. Any idea why this happens?
import datetime
import tensorflow as tf

log_dir = "logs\\fit\\" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir,
                                                      histogram_freq=1,
                                                      write_graph=True,
                                                      write_images=False,
                                                      update_freq='epoch',
                                                      profile_batch=3,
                                                      embeddings_freq=1)
And I get
UnknownError: Failed to rename: logs\20200219-202538\train\checkpoint.tmp67d5ca45d1404cc584a86cf42d2761d3 to: logs\20200219-202538\train\checkpoint : Access is denied.
; Input/output error
The epoch on which it occurs seems to be random.
I had something similar; it seems like the path where TensorBoard wants to save the checkpoint is either not available or access to it is denied. Do you know Colab? I would suggest you copy your code and run your training up there (only if your dataset isn't too large). You can copy your dataset to your Google Drive and access it from Colab. If it works in Colab, then the problem is probably not with your code but with your Anaconda/Windows restrictions.
See: Mount Google Drive (Colab), Colab basics
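For example, mounting Drive in Colab takes only a couple of lines; here is a minimal sketch (the MyDrive/logs path is just an assumed layout):
from google.colab import drive

# Mount Google Drive; Colab will prompt you to authorize access
drive.mount('/content/drive')

# Assumed layout: keep datasets and TensorBoard logs under MyDrive
log_dir = '/content/drive/MyDrive/logs/fit'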
I know this doesn't solve your problem directly, but it may help you and boost your training speed with a juicy free cloud GPU.
I had the same issue (I was running on a Windows machine). I manually gave full permissions to the folder (right-click on the folder, edit permissions, and give full access to the 'Everyone' user) and everything went fine.
If you are working on a Unix system, you can try the same (chmod 777 <dir_name>).
P.S. Be aware of 'full permission' and 'chmod 777': anyone with access to the system can then view/edit the contents of the folder.
I am trying to solve an optimization problem with PyPSA in Google Colab using Gurobi as the solver, which normally works fine on my local computer (but takes too much time). When I try to run it in Google Colab, I always get the size-limited license error, although I have the correct non-size-limited academic license.
Before trying to run this in Google Colab I followed the steps indicated in the post "Google Colab: Installation and Licensing" (Gurobi) and created an environment to solve my model using the license:
import gurobipy as gp

# Create an empty environment and supply the WLS license credentials
e = gp.Env(empty=True)
e.setParam('WLSACCESSID', 'your wls accessid (string)')
e.setParam('WLSSECRET', 'your wls secret (string)')
e.setParam('LICENSEID', <your license id (integer)>)
e.start()
The problem is that the model to be optimized is created inside PyPSA, so I cannot create it myself with a line like:
model = gp.Model(env=e)
as indicated in the aforementioned post.
So what I need is to find out how to make the PyPSA model run in the correct environment.
I am using this dictionary to specify some parameters for the Gurobi Solver when running the optimization:
solver_options = {'Crossover': 0,
                  'Method': 2,
                  'BarHomogeneous': 1}
network.lopf(snapshots=network.snapshots, pyomo=False, solver_name='gurobi',
             solver_options=solver_options,
             extra_functionality=extra_functionalities)
How can I make the PyPSA optimization problem run in the correct environment?
Thank you in advance for your help.
Regards,
Sebastian
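One commonly suggested workaround, sketched below, is to write the WLS credentials to a Gurobi license file and point the GRB_LICENSE_FILE environment variable at it before the solve starts. This assumes PyPSA builds its Gurobi model in the default environment, which then picks up the full license on its own. An untested sketch with placeholder credentials:
import os

# Store the WLS credentials in a license file so the default Gurobi
# environment (the one PyPSA creates internally) can find them.
lic_path = '/content/gurobi.lic'             # any writable path in Colab
with open(lic_path, 'w') as f:
    f.write('WLSACCESSID=your-wls-accessid\n')
    f.write('WLSSECRET=your-wls-secret\n')
    f.write('LICENSEID=your-license-id\n')   # your integer license id

# Must be set before the first Gurobi environment is started
os.environ['GRB_LICENSE_FILE'] = lic_path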
I am doing research in deep learning using the TensorFlow 2 Object Detection API, and I am getting this error while running model training. I followed the Gilbert Tanner and Edje Electronics tutorials for the basic installation and environment settings, and I am using the newest GitHub commit of the TensorFlow Object Detection API. I converted all the .protos files into .py files but am still facing this error. I am attaching a screenshot of the error; please check it and let me know if you can help.
Thanks in advance.
I had the same problem, and also with target_assigner.proto and center_net.proto. You have to add all three (.\object_detection\protos\fpn.proto, .\object_detection\protos\target_assigner.proto, and .\object_detection\protos\center_net.proto) to the protoc command. So the whole command should be:
protoc --python_out=. .\object_detection\protos\anchor_generator.proto .\object_detection\protos\argmax_matcher.proto .\object_detection\protos\bipartite_matcher.proto .\object_detection\protos\box_coder.proto .\object_detection\protos\box_predictor.proto .\object_detection\protos\eval.proto .\object_detection\protos\faster_rcnn.proto .\object_detection\protos\faster_rcnn_box_coder.proto .\object_detection\protos\grid_anchor_generator.proto .\object_detection\protos\hyperparams.proto .\object_detection\protos\image_resizer.proto .\object_detection\protos\input_reader.proto .\object_detection\protos\losses.proto .\object_detection\protos\matcher.proto .\object_detection\protos\mean_stddev_box_coder.proto .\object_detection\protos\model.proto .\object_detection\protos\optimizer.proto .\object_detection\protos\pipeline.proto .\object_detection\protos\post_processing.proto .\object_detection\protos\preprocessor.proto .\object_detection\protos\region_similarity_calculator.proto .\object_detection\protos\square_box_coder.proto .\object_detection\protos\ssd.proto .\object_detection\protos\fpn.proto .\object_detection\protos\target_assigner.proto .\object_detection\protos\center_net.proto .\object_detection\protos\ssd_anchor_generator.proto .\object_detection\protos\string_int_label_map.proto .\object_detection\protos\train.proto .\object_detection\protos\keypoint_box_coder.proto .\object_detection\protos\multiscale_anchor_generator.proto .\object_detection\protos\graph_rewriter.proto .\object_detection\protos\calibration.proto .\object_detection\protos\flexible_grid_anchor_generator.proto
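As a side note, on Linux/macOS the shell expands wildcards for you, so the much shorter form below compiles every .proto file at once (on Windows cmd the wildcard is not expanded, which is why the explicit list above is needed); this assumes you run it from the research directory:
protoc object_detection/protos/*.proto --python_out=.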
I am following "blogdown: Creating Websites with R Markdown" and unfortunately am stuck in Section 1.2. I created a new project in an empty folder, but get this error:
blogdown::new_site()
'C:\Users\rose89\AppData\Roaming\Hugo\hugo.exe" new site ".' is not recognized as an internal or external command, operable program or batch file.
Error in shell(cmd, mustWork = TRUE, intern = intern) :
'"C:\Users\rose89\AppData\Roaming\Hugo\hugo.exe" new site "." --force -f toml' execution failed with error code 1
I have tried removing and re-installing blogdown. When I try File > New Project > New Directory > website using blogdown, I get a popup error: R code execution error
I am using Windows, RStudio 1.3.959, and R version 4.0.2. Here is some other info:
getwd()
1 "C:/Users/rose89/Documents/anothernewproject"
list.files('content', '.md$', full.names = TRUE, recursive = TRUE)
character(0)
I would prefer to use the first approach in the console, but I feel stuck since I can't even get the point-and-click approach to work. If anyone has suggestions, I would greatly appreciate them! Thank you. Also, this is my first post on Stack Overflow and my first time trying blogdown, so apologies if my question is not phrased clearly!
Update: I believe the error is due to my domain's group policy blocking hugo.exe (and other zipped .exe programs) on my Windows machine. I am working with my department's IT staff to find a workaround.
I am following the TensorFlow Specialization on Coursera, where a certain piece of code works absolutely fine in Google Colab, but when I try to run it locally in PyCharm it gives the following error:
Failed to find data adapter that can handle input
Any suggestions?
Can you share the code where the error occurred? It should be available in the logs in your PyCharm console.
Looking at your comments, it seems that the model is expecting an array while you provided a list.
I was facing the same issue. It turns out the data was in the form of a list, and I had to convert the fields into numpy arrays like:
import numpy as np

training_padded = np.array(training_padded)
training_labels = np.array(training_labels)
testing_padded = np.array(testing_padded)
testing_labels = np.array(testing_labels)
That's it!
Try it out and let me know if it works.
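For context, here is a minimal self-contained sketch (toy data and a hypothetical one-layer model) of the same fix; depending on your TensorFlow version, passing plain Python lists to fit can trigger the data adapter error, while numpy arrays are always accepted:
import numpy as np
import tensorflow as tf

# Toy data as plain Python lists (hypothetical example)
x = [[0.0], [1.0], [2.0], [3.0]]
y = [0.0, 1.0, 2.0, 3.0]

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer='adam', loss='mse')

# Convert the lists to numpy arrays before fitting
model.fit(np.array(x), np.array(y), epochs=1)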
If you look at the TensorBoard dashboard for the cifar10 demo, it shows data for multiple runs. I am having trouble finding a good example of how to set up the graph to output data in this fashion. I am currently doing something similar to this, but it seems to be combining data from runs, and whenever a new run starts I see this warning on the console:
WARNING:root:Found more than one graph event per run.Overwritting the graph with the newest event
The solution turned out to be simple (and probably a bit obvious), but I'll answer anyway. The writer is instantiated like this:
writer = tf.train.SummaryWriter(FLAGS.log_dir, sess.graph_def)
The events for the current run are written to the specified directory. Instead of having a fixed value for the logdir parameter, just set a variable that gets updated for each run and use that as the name of a sub-directory inside the log directory:
writer = tf.train.SummaryWriter('%s/%s' % (FLAGS.log_dir, run_var), sess.graph_def)
Then just specify the root log_dir location when starting tensorboard via the --logdir parameter.
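For instance, a timestamp makes a convenient run name. Building on the snippet above (same TF 0.x-era API; FLAGS and sess come from the surrounding training script, and run_var is just an illustrative name):
import time

# One sub-directory per run, named by the run's start time
run_var = time.strftime('%Y%m%d-%H%M%S')

# FLAGS.log_dir and sess.graph_def as in the snippet above
writer = tf.train.SummaryWriter('%s/%s' % (FLAGS.log_dir, run_var), sess.graph_def)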
As mentioned in the documentation, you can specify multiple log directories when running TensorBoard. Alternatively, you can create multiple run subfolders in the log directory to visualize the different runs in the same charts.
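In older TensorBoard versions, the --logdir flag accepted a comma-separated list of name:path pairs for exactly this purpose (the run names and paths below are placeholders):
tensorboard --logdir=run1:/tmp/logs/run1,run2:/tmp/logs/run2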