How can I free pedestrians from a waiting area according to a train arrival schedule in AnyLogic?

I have a train station model and a train arrival schedule. I want pedestrians to wait in the waiting area until the train arrives and the passengers on the train get off first. I tried to use a 'hold' block, an event, and the free() function call, releasing pedestrians from the waiting area based on a counter of pedestrians getting off the train and onto the escalator.
If the number of passengers who got off the train equals the number who got on the escalator, the counter is zero, which satisfies the condition for releasing the pedestrians in the waiting area to board the train:
if ( counter == 0 ) {
    // everyone who got off the train has reached the escalator
    hold.unblock();
} else {
    hold.block();
}
However, it did not work. It either holds the pedestrians forever or ignores the condition, and they head to the train even if it has not arrived yet.
Can you please help me?

I will assume you are using a PedWait block to build your logic... if you want to control the waiting time, you need to set the block's 'Delay ends' property to 'on free() function call', in which case the hold is not needed.
Then you can do the following instead:
if ( counter == 0 ) {
    pedWait.freeAll(); // release all pedestrians currently held in the PedWait block
}

Related

Tensorflow: how to stop small values leaking through pruning?

The documentation for PolynomialDecay suggests that by default frequency=100, so pruning is only applied every 100 steps. This presumably means that the parameters which are pruned to 0 will drift away from 0 during the other 99 of every 100 steps. So at the end of the pruning process, unless you are careful to train for an exact multiple of 100 steps, you may well end up with a model that is not perfectly pruned but has a large number of near-zero values.
How does one stop this happening? Do you have to tweak frequency to be a divisor of the number of steps? I can't find any code samples that do that.
As per this example in the docs, while training, the tfmot.sparsity.keras.UpdatePruningStep() callback must be registered:
callbacks = [
    tfmot.sparsity.keras.UpdatePruningStep(),
    …
]
model_for_pruning.fit(…, callbacks=callbacks)
This ensures that the mask is applied (and so the pruned weights are set back to zero) when training ends:
https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/python/core/sparsity/keras/pruning_callbacks.py#L64
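For context, here is a minimal end-to-end sketch of where that callback fits; the model, data, and schedule values below are illustrative placeholders, not taken from the question:

import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy data and model, purely for illustration.
x_train = np.random.randn(256, 20).astype('float32')
y_train = np.random.randn(256, 1).astype('float32')
base_model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(1),
])

# PolynomialDecay with its default frequency=100, as discussed above.
schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.8, begin_step=0, end_step=1000)

model_for_pruning = tfmot.sparsity.keras.prune_low_magnitude(
    base_model, pruning_schedule=schedule)
model_for_pruning.compile(optimizer='adam', loss='mse')

# UpdatePruningStep keeps the pruning step counter and masks in sync,
# and re-applies the mask when training ends.
model_for_pruning.fit(x_train, y_train, epochs=5,
                      callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# strip_pruning removes the pruning wrappers for export; the mask was
# already applied to the weights at the end of training.
final_model = tfmot.sparsity.keras.strip_pruning(model_for_pruning)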

Shuffling the training dataset with Tensorflow object detection api

I'm working on a logo detection algorithm using the Faster-RCNN model with the Tensorflow object detection API.
My dataset is alphabetically ordered (so there are a hundred adidas logos, then a hundred apple logos, etc.), and I would like it to be shuffled during training.
I've put some values in the config file:
train_input_reader: {
  shuffle: true
  queue_capacity: some value
  min_after_dequeue: some other value
}
However, whatever values I put in, the algorithm at first trains on all of the 'a' logos (adidas, apple, and so on), and only after a while starts to see the 'b' logos (bmw etc.), then the 'c' ones, and so on.
Of course I could just shuffle my input dataset directly, but I would like to understand the logic behind it.
PS: I've seen this post about shuffling and min_after_dequeue, but I still don't quite get it. My batch size is 1, so it shouldn't be using tf.train.shuffle_batch() but only tf.RandomShuffleQueue.
My training dataset size is 5000, and if I write min_after_dequeue: 4000 or 5000 it is still not shuffled right. Why is that?
Update:
@AllenLavoie It's a bit hard for me, as there are a lot of dependencies and I'm new to Tensorflow.
But in the end the queue is constructed by:
_, string_tensor = tf.contrib.slim.parallel_reader.parallel_read(
    config.input_path,
    reader_class=tf.TFRecordReader,
    num_epochs=(input_reader_config.num_epochs
                if input_reader_config.num_epochs else None),
    num_readers=input_reader_config.num_readers,
    shuffle=input_reader_config.shuffle,
    dtypes=[tf.string, tf.string],
    capacity=input_reader_config.queue_capacity,
    min_after_dequeue=input_reader_config.min_after_dequeue)
It seems that when I put num_readers = 1 in the config file, the dataset is finally shuffled the way I want (at least in the beginning), but with more readers the logos somehow start out in alphabetical order.
I recommend shuffling the dataset prior to training. The way shuffling currently happens is imperfect; my guess at what is happening is that the queue starts off empty and only gets examples that start with 'A'. After a while it may be more shuffled, but there is no getting around the beginning, when the queue hasn't been filled yet.
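For a dataset of this size (5000 examples), a one-off offline shuffle of the TFRecord file is straightforward. A minimal sketch, assuming a TF 1.x install and placeholder filenames:

import random
import tensorflow as tf

# Read every serialized example into memory (fine for ~5000 records),
# shuffle them once, and write them back out in random order.
records = list(tf.python_io.tf_record_iterator('train_ordered.tfrecord'))
random.shuffle(records)

with tf.python_io.TFRecordWriter('train_shuffled.tfrecord') as writer:
    for record in records:
        writer.write(record)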

State resetting in LSTMs during training and testing

I am trying to understand and implement LSTMs. I understand that one needs to define a sequence length T, and that training is performed in batches, so we feed the network several sequences of length T. Now the LSTM needs a previous state as input, which, as I understand it, is initialized to zero. My question is: is the state reset to zero after every sequence? For example, I have sequence 1, the state vector is carried forward within this sequence, and then I set it to zero for the next sequence? Or is it carried over to sequence 2? If so, how is this done for unrelated sequences? For example, I have samples from 2 different texts, and it would not make sense to carry the state from text 1 to text 2; how is this handled in practice?
At testing time, is the state vector initialized as zero and carried through the whole sequence, or is it reset after each sub-sequence?
Note: I also tagged this with Tensorflow, since that is the framework I am using, and maybe someone there can help me too.
In Tensorflow, I am 95% sure the starting state is reset to zero for every sequence, for every element in your batch, and between batches. (5% because of the "never say never" rule :)
EDIT:
I should probably elaborate. The way Tensorflow works, it first constructs a graph and then pushes your data through it. If you look at the recurrent graph you constructed, I believe you'll see that its head (the first state) is connected to zero, which means that every time you push data through the graph (e.g. via sess.run()), it gets a fresh zero from the zero generator, so its old state from previous runs is forgotten.
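To make that concrete, here is a minimal sketch using the TF 1.x-era API; the shapes and names are illustrative:

import tensorflow as tf

batch_size, time_steps, input_dim, num_units = 32, 20, 64, 128
inputs = tf.placeholder(tf.float32, [batch_size, time_steps, input_dim])

cell = tf.nn.rnn_cell.LSTMCell(num_units)
# zero_state is a graph node that produces fresh zeros, so every sess.run()
# starts each sequence in the batch from an all-zero state.
zero_state = cell.zero_state(batch_size, tf.float32)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs,
                                         initial_state=zero_state)

If you instead want the state to survive across runs, you have to feed final_state back in yourself; the sketch after the next answer shows that pattern.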
It depends on how you implement batch processing of the RNN.
Theoretically, the state is very important in processing a series of data, so you should not reset the state while the whole sequence of data is not yet finished.
So, usually, you reset the state when an epoch finishes, and you should not reset it when just one batch cycle finishes.
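A minimal sketch of that pattern, under the same TF 1.x assumptions as above; the data is random filler, purely to make the example self-contained:

import numpy as np
import tensorflow as tf

batch_size, time_steps, input_dim, num_units = 32, 20, 64, 128
inputs = tf.placeholder(tf.float32, [batch_size, time_steps, input_dim])
c_in = tf.placeholder(tf.float32, [batch_size, num_units])
h_in = tf.placeholder(tf.float32, [batch_size, num_units])

cell = tf.nn.rnn_cell.LSTMCell(num_units)
initial_state = tf.nn.rnn_cell.LSTMStateTuple(c_in, h_in)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs,
                                         initial_state=initial_state)

batches = [np.random.randn(batch_size, time_steps, input_dim).astype(np.float32)
           for _ in range(5)]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(2):
        # reset the state to zero only at epoch boundaries
        c = np.zeros([batch_size, num_units], np.float32)
        h = np.zeros([batch_size, num_units], np.float32)
        for batch in batches:
            # feed the previous batch's final state back in
            _, (c, h) = sess.run([outputs, final_state],
                                 feed_dict={inputs: batch, c_in: c, h_in: h})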

Tensorflow--how to limit epochs with evaluation only?

Given that I train a model, save it off with a metagraph/tf.train.Saver, and then load that graph into a new script/process to test against test data, what is the best way to make sure I only iterate over the test data once?
With my training data, I want to be able to iterate over the entire data set an arbitrary number of times. I use
tf.train.string_input_producer()
to drive a queue of files to load for training, so I can safely leave num_epochs at its default (=None) and let other controls drive training termination.
However, when I run the graph for evaluation, I just want to evaluate the test set once (and gather the appropriate statistics).
Initial attempted solution:
Make a tensor for epochs, pass that into tf.train.string_input_producer, and then tf.assign it to the appropriate value based on test/train.
But:
tf.train.string_input_producer only takes integers as num_epochs, so this isn't possible... unless I'm missing something.
Further notes: I use
tf.train.batch()
to read in test/train data that has been serialized into protocol buffers (https://www.tensorflow.org/versions/r0.11/how_tos/reading_data/index.html#file-formats), so I have minimal visibility into how the data is loaded and how far along it is.
tf.train.batch apparently throws tf.errors.OutOfRangeError, but I'm not clear on how to catch that successfully, or whether that is even what I really want to do. I tried a very naive
try...except...finally
(like in https://www.tensorflow.org/versions/r0.11/how_tos/reading_data/index.html#creating-threads-to-prefetch-using-queuerunner-objects), which didn't catch the error from tf.train.batch.
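For reference, the queue-runner pattern from that linked how-to looks roughly like this; a sketch, with a placeholder filename and with the parsing op elided. Note that num_epochs is backed by a local variable, which must be initialized before the queue runners start:

import tensorflow as tf

# num_epochs=1 makes the filename queue raise OutOfRangeError after one pass.
filename_queue = tf.train.string_input_producer(['test.tfrecords'], num_epochs=1)
reader = tf.TFRecordReader()
_, serialized = reader.read(filename_queue)
# ... parse `serialized` into example tensors here ...
batch = tf.train.batch([serialized], batch_size=32)

init_op = tf.group(tf.global_variables_initializer(),
                   tf.local_variables_initializer())

with tf.Session() as sess:
    sess.run(init_op)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        while not coord.should_stop():
            sess.run(batch)  # evaluate and gather statistics here
    except tf.errors.OutOfRangeError:
        pass                 # exactly one pass over the test set is done
    finally:
        coord.request_stop()
        coord.join(threads)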

Why does tensorflow sometimes run slower and slower as training progresses?

I am training an RNN; the first epoch took 7.5 hours, but as training progressed Tensorflow ran slower and slower, and the second epoch took 55 hours. I checked the code; the calls that become slower over time are these:
session.run([var1, var2, ...], feed_dict=feed),
tensor.eval(feed_dict=feed).
For example, for one session.run([var1, var2, ...], feed_dict=feed) line, the program begins by spending 0.1 seconds on it, but as it keeps running the time spent on this line grows and grows; after 10 hours it comes to 10 seconds.
I have run into this several times. What triggers it, and what can I do to avoid it?
Does this line of code, self.shapes = [numpy.zeros(g[1].get_shape(), numpy.float32) for g in self.compute_gradients], add nodes to the Tensorflow graph? I suspect this may be the reason. This line of code is called periodically many times, and self is not an object of tf.train.Optimizer.
Try finalizing your graph after you create it (graph.finalize()). This will prevent operations from being added to the graph. I also think self.compute_gradients is adding operations to the graph. Try defining the operation outside your loop and only running it inside the loop.
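A minimal sketch of the finalize technique; the ops here are placeholders standing in for the real model:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 10])
w = tf.Variable(tf.zeros([10, 1]))
y = tf.matmul(x, w)
init = tf.global_variables_initializer()  # create every op before finalizing

# After finalize(), any attempt to add an op raises a RuntimeError instead
# of silently growing the graph (and slowing down session.run over time).
tf.get_default_graph().finalize()

with tf.Session() as sess:
    sess.run(init)
    sess.run(y, feed_dict={x: [[0.0] * 10]})  # fine: runs existing ops
    # tf.matmul(y, w)  # would raise: graph is finalized and cannot be modified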
I had a similar issue. My solution was putting
tf.reset_default_graph()
after each epoch or sample. This resets the graph and frees up all the resources used, in a way that closing the session does not.
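A minimal sketch of that approach, with a toy graph standing in for the real model; the reset must happen before the epoch's session is created:

import tensorflow as tf

for epoch in range(3):
    tf.reset_default_graph()   # discard every op left over from the last epoch
    x = tf.constant([[1.0, 2.0]])
    y = x * 2.0                # the (toy) model is rebuilt from scratch
    with tf.Session() as sess:
        print(epoch, sess.run(y))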