Load Repast Model from freeze dry - repast-simphony

A simple question. I have a model that is very costly to initialize but much lighter to run. There is an option to freeze-dry the model after initialization in the GUI. However, I could not figure out how to load the freeze-dried model in the GUI or in the batch GUI. Any hints are appreciated. Thanks
I freeze-dried the initialized model but could not figure out a way to load the model state.

If you right-click on the Data Loaders element in the Scenario Tree and choose "Set Data Loader", you should see an option for loading from the freeze-dried model. I think the XML format is the one you want.

Related

Is there an unsupervised TensorFlow/PyTorch model that can detect objects that keep on changing?

I've been looking for a way to detect objects that keep changing through a series of images.
Since there can be many different lighting conditions, a basic image-processing script alone won't do it.
I'm looking for a way to train a model on images taken at intervals so that it later outputs a mask with the locations of the objects that keep changing.
I've searched for unsupervised deep learning models that do this or similar tasks, but found nothing.
Is this a problem that can be solved with AI, or is there a better solution?
The following image clarifies the desired pipeline:
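For a concrete starting point, here is a minimal sketch of the kind of basic image-processing baseline mentioned above, using OpenCV's MOG2 background subtractor. The folder name and file pattern are hypothetical, and this is exactly the classical approach that struggles under strong lighting changes:

```python
# Minimal baseline sketch: classical background subtraction with OpenCV.
# The folder name and file pattern are hypothetical.
import glob
import cv2

# MOG2 adapts its background model over time, which gives it some
# (limited) robustness to gradual lighting changes.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=True)

mask = None
for path in sorted(glob.glob("interval_shots/*.jpg")):  # images taken at intervals
    frame = cv2.imread(path)
    mask = subtractor.apply(frame)  # foreground mask of pixels that changed

# The final mask roughly marks the objects that kept changing across the series.
if mask is not None:
    cv2.imwrite("changing_objects_mask.png", mask)
```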

How to do transfer learning or fine-tune YOLOv4-darknet while freezing some layers?

I'm a beginner in the object detection field.
First, I followed the YOLOv4 custom-training tutorial from here and completed it successfully. Then I started thinking: if I have a new task that is similar to what YOLOv4 was pre-trained on (the 80 COCO classes) and I only have a small dataset, it would be great to fine-tune the model (unfreezing only the last layer) to keep or even improve detector performance using that small, similar dataset. This reference seems to support my thinking about the fine-tuning I want to do.
Then I went to Alexey's GitHub here to check how to freeze layers, and found that I should use stopbackward=1. It says:
"...set param stopbackward=1 for layer-136 in cfg-file"
But I have no idea where "layer-136" is in the cfg file here, nor where to put stopbackward=1 if I only want to unfreeze the last layer (freezing all the others). To summarize my questions:
1. Where (on which line) should I put stopbackward=1 in yolov4-custom.cfg if I want to unfreeze the last layer and freeze all the others?
2. What is the "layer-136" mentioned in Alexey's GitHub reference? (Is it one of the classifier layers, or something else?)
3. On which line of yolov4-custom.cfg should I put stopbackward=1 for that layer-136?
Any further information is greatly appreciated. Please advise.
Thank you in advance.
Regards,
Sona
"layer-136" is located just before the head of YOLOv4. To make it easier to see, visualize the .cfg file in the Netron app while reading the .cfg in a text editor, so you can understand where each layer sits. You can spot each layer's inputs and outputs (the x-layer references) when you analyze it with Netron.
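For illustration, here is a hypothetical excerpt of what that region of yolov4-custom.cfg looks like. The exact line numbers differ between files, so rather than counting lines, locate the [convolutional] section just before the first [yolo] head (the "layer-136" region) and add stopbackward=1 there:

```
# Hypothetical excerpt -- not the literal contents of yolov4-custom.cfg.
# stopbackward=1 stops the backward pass at this layer, so everything
# from the input up to this point is frozen; later layers keep training.

[convolutional]
stopbackward=1
batch_normalize=1
size=3
stride=1
pad=1
filters=256
activation=leaky

# ... remaining head layers (still trainable) ...

[yolo]
```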

Continue training CoreML Model

I'm trying to get a better understanding of how to create object detection models in Turi Create (for use in CoreML). I'm trying to create a model that detects custom images I designed and printed myself. To avoid having to take a huge number of photos, I figured I'd use the one-shot object detection feature provided by Turi Create. So far so good. I feed the algorithm two starter images, and it successfully generates the synthetic dataset and creates a somewhat reliable model.
Now I'm wondering what happens when I want to add a third category. I could of course add a third starter image and run the code again, but it feels like two-thirds of that work would be redundant...
Is there a way to continue training a previously trained model, or to combine multiple models, so I don't have to retrain from scratch every time I add a category? If not, is there another way to get this done (e.g., TensorFlow)?
Turi Create is rather limited in the options it offers for retraining (none, basically). If you want more control over the process, using a tool such as TensorFlow is the better choice.
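As a rough sketch of what the retrain-from-scratch workflow looks like: since incremental training isn't offered, adding a third category means rebuilding the SFrame of starter images and training again. The file names and labels below are made up:

```python
# Rough sketch: Turi Create offers no incremental training, so adding a
# third category means rebuilding the starter SFrame and retraining.
# File names and labels here are assumptions, not a real project.
import turicreate as tc

starters = tc.SFrame({
    "image": [tc.Image("starter_a.png"),
              tc.Image("starter_b.png"),
              tc.Image("starter_c.png")],  # the newly added third category
    "label": ["card_a", "card_b", "card_c"],
})

# Regenerates the synthetic dataset for all three categories and trains anew.
model = tc.one_shot_object_detector.create(starters, target="label")
model.export_coreml("CardDetector.mlmodel")
```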

Does model.save_weights include optimizer state?

If yes, then how did they do that? I mean, say I have a custom model built via subclassing, and my optimizer is a separate object. How can one command save the weights of two different objects? In particular, how does it know that those two objects are related? Is it due to magic done by model.compile?
EDIT: I just realized that a model has an attribute model.optimizer. Is that how Keras does it: make the optimizer an attribute of the model so it gets saved along with it?
No, model.save_weights() only saves the model's parameters. The state of the objects the model was compiled with (optimizer, callbacks, losses, metrics) is not saved.
You should use model.save() to save the model's optimizer state and other training configuration. Please refer to this documentation (it is the most straightforward way to save the optimizer together with the model).
If for some reason you specifically want to use model.save_weights(), please refer to this Stack Overflow question on how to save the model's optimizer (it can be a bit tricky).
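A minimal sketch of the difference, using a toy TF2/Keras model (the tiny network and file names are only for illustration):

```python
# Toy demonstration: model.save() preserves optimizer state,
# model.save_weights() does not.
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")  # the optimizer becomes model.optimizer
model.fit(tf.random.normal((32, 4)), tf.random.normal((32, 1)),
          epochs=1, verbose=0)

model.save_weights("weights.h5")  # parameters only; Adam's slot variables are lost
model.save("full_model.h5")       # architecture + weights + optimizer state

restored = tf.keras.models.load_model("full_model.h5")
# restored.optimizer carries the saved state, so training resumes where it left off.
restored.fit(tf.random.normal((32, 4)), tf.random.normal((32, 1)),
             epochs=1, verbose=0)
```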

Visualizing the detection process in Mask-RCNN

I am working on a project that aims to detect objects in certain difficult circumstances. I ran a test with Mask_RCNN on a dataset that contains that specific type of difficult example, and it did a pretty good job on some of them.
But surprisingly, some other examples were not detected, even though there was no obvious reason. To understand the cause of this performance difference, I was advised to use TensorBoard. But then I realized that it is mostly used during the training phase, as I understood from this video.
At the end of the video, however, they mention a TensorBoard integration project, the TensorFlow Debugger Integration. Unfortunately, I could not find any further information about the status of that feature.
Is there any way to visualize weights and activation maps inside a CNN during inference/evaluation phase?
The main difference between training and inference time for TensorBoard is the global_step value. Most graphs display the global step on the x-axis. You can supply your own global-step counter if you like, but you'll have to decide what the x-axis should represent in your case, since "time" isn't really a meaningful notion during inference. Other tabs, such as the Images tab, don't have a time component, so using them should work the same as during training.
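As a sketch of supplying your own step counter with the TF2 summary API (global_step is the TF1-era name for the same idea); the log directory and stand-in images below are assumptions:

```python
# Sketch: during inference there is no training step, so use the index of
# the test image as the x-axis value. Log directory and images are stand-ins.
import tensorflow as tf

writer = tf.summary.create_file_writer("logs/inference")
test_images = tf.random.uniform((5, 64, 64, 3))  # stand-in for real test data

with writer.as_default():
    for i in range(len(test_images)):
        # 'step' is the x-axis value TensorBoard plots against; here it is
        # simply the index of the test image rather than a training step.
        tf.summary.image("input", test_images[i][tf.newaxis, ...], step=i)
```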
The TensorFlow Debugger is a nice terminal debugger, but it isn't really related to what you're trying to do here; it's certainly not a visualization tool.
Another approach is to simply generate your own plots and output a set of PDFs with the visualizations you need, using standard tools like matplotlib, for each test image. I've found that tools like XnView make it really easy to look through a lot of PDF visualizations and understand what's going on; I've used this approach quite effectively. If you want to review many hundreds or thousands of results quickly, it's easier if all the visuals are simply dumped to a directory.
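As a rough sketch of that approach: probe an intermediate layer of a Keras model and write one PDF of activation maps per test image. The stand-in network, layer name, and random images below are assumptions, not Mask-RCNN itself:

```python
# Rough sketch of the "dump your own plots" approach: expose an intermediate
# layer and save its activation maps as one PDF per image.
import tensorflow as tf
import matplotlib
matplotlib.use("Agg")  # no display needed; just write files
import matplotlib.pyplot as plt

model = tf.keras.applications.MobileNetV2(weights=None)  # stand-in network
probe = tf.keras.Model(inputs=model.input,
                       outputs=model.get_layer("block_1_expand_relu").output)

images = tf.random.uniform((3, 224, 224, 3))  # stand-in test images
activations = probe(images)                   # shape (N, H, W, C)

for i in range(activations.shape[0]):
    fig, axes = plt.subplots(2, 4, figsize=(10, 5))
    for c, ax in enumerate(axes.flat):        # first 8 channels
        ax.imshow(activations[i, :, :, c], cmap="viridis")
        ax.axis("off")
    fig.savefig(f"viz_{i:04d}.pdf")           # browse the PDFs with e.g. XnView
    plt.close(fig)
```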