Gimp - Easy way to make many layers visible?

In Gimp, I've created a .xcf file that consists of some 200 layers. Some are visible and some not. Now I want to create a picture that consists of all layers, so I have to make all layers visible. Later I'll have to return to the state where some layers are visible and some not. How can I achieve this without clicking several hundred clickboxes for visibility?

Shift+Click on the eye icon (eycon?) of a layer in the layers dialog, or the place where it should be, if the layer is currently invisible.
This will:
- make the layer you are clicking on visible
- make all other layers invisible on the first click, and visible again on the next click
See http://docs.gimp.org/2.8/en/gimp-dialogs-structure.html#gimp-layer-dialog
To get back to the previous state, I'd use File->Revert; this discards any changes and reloads the file from disk.
But...
... this is Stack Overflow, so we need to do this in code...
I'd suggest using the Python console in GIMP (Filters->Python-Fu->Console). Assuming the image is the only one you're working on, the following code sets all of its layers to be visible:
pdb.gimp_image_undo_group_start(gimp.image_list()[0])
for layer in gimp.image_list()[0].layers:
    layer.visible = True
pdb.gimp_image_undo_group_end(gimp.image_list()[0])
The code's main part is a loop over all layers of the image, setting them to visible. The loop is wrapped in an undo group, allowing all visibility changes to be undone in one single step.
But... Layer groups?
Yes, we're not quite there yet.
If your image uses layer groups, you will notice that the above code makes any layer not in a group, and the groups themselves, visible, but it won't affect any layer inside a group.
We can tell whether a layer we encounter in that for loop is a layer group - pdb.gimp_item_is_group(layer) will return true for those. So while iterating, we could check if the current item is a group, and start iterating over its children.
Python has a nifty way of filtering lists (and gimp.Image.layers is one) by an arbitrary boolean filter expression, and we have one of those, see above.
So instead of complicating our current loop with additional if statements, we can do this:
pdb.gimp_image_undo_group_start(gimp.image_list()[0])
# iterate layer groups
for group in [group for group in gimp.image_list()[0].layers if pdb.gimp_item_is_group(group)]:
    # you want a group.name check here to pick a specific group
    for layer in group.layers:
        layer.visible = True
# iterate non-group layers
for layer in gimp.image_list()[0].layers:
    layer.visible = True
pdb.gimp_image_undo_group_end(gimp.image_list()[0])
But... Nested layer groups?
Yes, still not quite there - if you have nested layer groups. The code just above only gets into the first level of groups, and won't affect any layer in a deeply nested group structure.
This is where a recursive procedure will be more useful than iterative loops, so stay tuned for an additional update.
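Until that update lands, here is a minimal sketch of what such a recursive helper could look like (untested, and assuming the same pdb and gimp objects used above):

def set_visible_recursive(layers):
    # walk a list of layers, descending into layer groups at any depth
    for layer in layers:
        layer.visible = True
        if pdb.gimp_item_is_group(layer):
            set_visible_recursive(layer.layers)

image = gimp.image_list()[0]
pdb.gimp_image_undo_group_start(image)
set_visible_recursive(image.layers)
pdb.gimp_image_undo_group_end(image)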

Related

How to freeze one filter in a layer while keeping other filters trainable?

Suppose that the weight matrix for one layer is [32,64,4,2]. Is it possible to freeze its first filter while keeping the other 31 filters trainable?
I've tried setting requires_grad, however that flag applies to the whole layer.
It is possible, but not as straightforward as you might think. What nn.Conv2d effectively does is initialize and own the weight (and bias, if applicable) parameters; in forward it just dispatches to functional.conv2d.
In order to achieve your goal, you will need to create a class which holds the frozen filter as a buffer (non-parameter) and the 31 remaining filters as a parameter. Then, in forward, it will just concatenate the buffer and the parameter to obtain a 32-channel filter and dispatch to functional.conv2d.
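A rough, hedged sketch of such a module (the class name and the random initialization are only illustrative; it assumes the 32x64x4x2 weight shape from the question):

import torch
import torch.nn as nn
import torch.nn.functional as F

class PartiallyFrozenConv2d(nn.Module):
    # illustrative sketch: filter 0 is frozen, the remaining 31 are trainable
    def __init__(self):
        super(PartiallyFrozenConv2d, self).__init__()
        weight = torch.randn(32, 64, 4, 2)                        # same shape as the layer in question
        self.register_buffer('frozen_weight', weight[:1].clone())  # filter 0, a buffer, not a Parameter
        self.trainable_weight = nn.Parameter(weight[1:].clone())   # filters 1..31
        self.bias = nn.Parameter(torch.zeros(32))

    def forward(self, x):
        # rebuild the full 32-filter kernel and dispatch to the functional conv
        weight = torch.cat([self.frozen_weight, self.trainable_weight], dim=0)
        return F.conv2d(x, weight, self.bias)

Only trainable_weight (and the bias) will receive gradients; the first filter stays fixed. Whether the corresponding bias entry should also be frozen is a separate design choice.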

Background images in one class object detection

When training a single-class object detector in TensorFlow, I am trying to pass instances of images where no signal object exists, so that the model doesn't learn that every image contains at least one instance of that class. E.g. if my signal were cats, I'd want to pass pictures of other animals/landscapes as background - this could also reduce false positives.
I can see that a class id (0) is reserved in the object detection API for background, but I am unsure how to encode this in the TFRecords for my background images - the class could be 0, but what would the bounding box coords be? Or do I need a simpler classifier on top of this model to detect whether there is a signal in the image at all, prior to detecting its position?
The latter approach, a simple classifier, makes sense. I don't think there is a way to do the first part. You can also check the confidence score, in addition to checking whether the object is present.
It is good practice to include images with no objects of interest in the dataset. To do so, use the same tools (like labelImg) that you used for adding the boxes; an image with no bounding boxes will have an XML file with no box details, only details of the image itself. The create-TFRecord script will then create the TFRecord from the XML files. Look at the links below for more information:
Create TFRecord example:
https://github.com/tensorflow/models/blob/master/research/object_detection/dataset_tools/create_pet_tf_record.py
Using your own dataset:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/using_your_own_dataset.md
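For illustration, a background image's tf.train.Example would simply carry empty box and class lists. A hedged sketch (the feature keys follow the object detection API's usual conventions; the helper name and arguments are made up):

import tensorflow as tf

def background_example(encoded_jpg, filename, height, width):
    # no boxes, no classes: all object-related feature lists are simply empty
    feature = {
        'image/encoded': tf.train.Feature(bytes_list=tf.train.BytesList(value=[encoded_jpg])),
        'image/filename': tf.train.Feature(bytes_list=tf.train.BytesList(value=[filename.encode()])),
        'image/format': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'jpeg'])),
        'image/height': tf.train.Feature(int64_list=tf.train.Int64List(value=[height])),
        'image/width': tf.train.Feature(int64_list=tf.train.Int64List(value=[width])),
        'image/object/bbox/xmin': tf.train.Feature(float_list=tf.train.FloatList(value=[])),
        'image/object/bbox/xmax': tf.train.Feature(float_list=tf.train.FloatList(value=[])),
        'image/object/bbox/ymin': tf.train.Feature(float_list=tf.train.FloatList(value=[])),
        'image/object/bbox/ymax': tf.train.Feature(float_list=tf.train.FloatList(value=[])),
        'image/object/class/label': tf.train.Feature(int64_list=tf.train.Int64List(value=[])),
        'image/object/class/text': tf.train.Feature(bytes_list=tf.train.BytesList(value=[])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))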

Does the object location in training affect the results for Faster RCNN?

Has anyone tried the effect of the per-class location in Faster RCNN?
Say my training data has one of the object classes always in one area of the frame, let's say the top right of the image, and in the evaluation dataset I have an image where this object is in another area, say the bottom left.
Is Faster RCNN capable of handling this case?
Or, if I want my network to find all of the classes in all areas of the frame, do I need to provide examples in the training dataset that cover all the areas?
Quoting the Faster R-CNN paper:
An important property of our approach is that it is translation invariant, both in terms of the anchors and the functions that compute proposals relative to the anchors. If one translates an object in an image, the proposal should translate and the same function should be able to predict the proposal in either location. This translation-invariant property is guaranteed by our method*
*As is the case of FCNs [7], our network is translation invariant up to the network's total stride
So the short answer is that you'll probably be OK if the object is mostly at a certain location in the train set and somewhere else in the test set.
A slightly longer answer is that the location may have side effects on accuracy, and it will probably be better to have the object in different locations; however, you can try, for testing purposes, moving N test samples into the train set and seeing how the accuracy changes on the remaining test samples.

Does TensorFlow RNN implement an Elman network fully?

Q: Is TensorFlow's RNN implemented to output an Elman network's hidden state?
cells = tf.contrib.rnn.BasicRNNCell(4)
outputs, state = tf.nn.dynamic_rnn(cell=cells, etc...)
I'm quite new to TF's RNN and curious about the meaning of outputs and state.
I'm following Stanford's tensorflow tutorial, but there seems to be no detailed explanation, so I'm asking here.
After testing, I think state is the hidden state after the sequence calculation and outputs is an array of the hidden states at each time step.
So I want to make it clear: outputs and state are just hidden state vectors, so to fully implement an Elman network, I have to add the V matrix shown in the picture and do the matrix multiplication myself. Am I correct?
I believe you are asking what the intermediate state and the output are.
From what I understand, the state would be the intermediate output after a convolution / sequence calculation, and it is hidden, so your understanding is in the right direction.
The output may vary depending on how you decide to implement your network model, but generally it is an array where the operation (convolution, sequence calculation, etc.) has been applied, after which activation and downsampling/pooling have been applied, to concentrate on identifiable features across that layer.
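To make the asker's point concrete: if you want the explicit Elman output y_t = g(V·h_t), you do add your own projection on top of what dynamic_rnn returns. A hedged TF1-style sketch (the shapes, sizes, and names are illustrative):

import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, 10, 8])         # [batch, time, features]
cell = tf.contrib.rnn.BasicRNNCell(4)
outputs, state = tf.nn.dynamic_rnn(cell=cell, inputs=inputs, dtype=tf.float32)

# outputs: hidden state h_t at every time step, shape [batch, time, 4]
# state:   hidden state after the last time step, shape [batch, 4]

# the "V" matrix of the Elman network: an extra projection you apply yourself
logits = tf.layers.dense(outputs, units=6)                  # y_t = V h_t + b, shape [batch, time, 6]
predictions = tf.nn.softmax(logits)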
From Colah's blog ( http://colah.github.io/posts/2015-08-Understanding-LSTMs/ ):
Finally, we need to decide what we’re going to output. This output will be based on our cell state, but will be a filtered version. First, we run a sigmoid layer which decides what parts of the cell state we’re going to output. Then, we put the cell state through tanh (to push the values to be between −1 and 1) and multiply it by the output of the sigmoid gate, so that we only output the parts we decided to.
For the language model example, since it just saw a subject, it might want to output information relevant to a verb, in case that’s what is coming next. For example, it might output whether the subject is singular or plural, so that we know what form a verb should be conjugated into if that’s what follows next.
Hope this helps.
Thank you

Remove data from tensorboard event files to make them smaller

When I train a model for multiple days with image summary activated, my .tfevent files are huge ( > 70GiB).
I don't want to deactivate the image summary as it allows me to visualize the progress of the network during training. However, once the network is trained, I don't need that information anymore (in fact, I'm not even sure it is possible to visualize previous images with tensorboard).
I would like to be able to remove them from the event file without losing other information like the loss curve (as it is useful for comparing models).
A solution would be to use two separate summaries (one for the images and one for the loss), but I would like to know if there is a better way.
It is certainly better to save the big summaries less often, as Terry suggested, but in case you already have a huge event file, you can still reduce its size by deleting some of the summaries.
I have had this issue, where I saved a lot of image summaries that I don't need now, so I have written a script to copy the event file while keeping only the scalar summaries:
https://gist.github.com/serycjon/c9ad58ecc3176d87c49b69b598f4d6c6
The important stuff is:
for event in tf.train.summary_iterator(event_file_path):
    event_type = event.WhichOneof('what')
    if event_type != 'summary':
        writer.add_event(event)
    else:
        wall_time = event.wall_time
        step = event.step

        # possible types: simple_value, image, histo, audio
        filtered_values = [value for value in event.summary.value if value.HasField('simple_value')]
        summary = tf.Summary(value=filtered_values)

        filtered_event = tf.summary.Event(summary=summary,
                                          wall_time=wall_time,
                                          step=step)
        writer.add_event(filtered_event)
You can use this as a base for more complicated stuff, like keeping only every 100th image summary, filtering based on summary tag, etc.
If you look at the event types in the log using @serycjon's loop, you'll see that the graph_def and meta_graph_def might be saved often.
I had 46 GB worth of logs that I reduced to 1.6 GB by removing all the graphs. You can leave one graph so that you can still view it in tensorboard.
Just handled this problem; hoping this is not too late.
My solution is to save the image summary only every 100 (or some other value) training steps. The .tfevent file will then grow much more slowly, and the final file size will be much smaller.
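A hedged sketch of that idea in TF1-style code (the session, the ops, the writer, and the 100-step interval are all placeholders):

# write scalar summaries every step, but image summaries only every 100 steps
for step in range(num_steps):
    if step % 100 == 0:
        _, scalars, images = sess.run([train_op, scalar_summary_op, image_summary_op])
        writer.add_summary(images, step)
    else:
        _, scalars = sess.run([train_op, scalar_summary_op])
    writer.add_summary(scalars, step)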