Saving Evaluation to a file while using TF OD API - tensorflow

The evaluation parameters look like this:
https://github.com/armaanpriyadarshan/Training-a-Custom-TensorFlow-2.X-Object-Detector/blob/master/doc/evaluation.png
But the thing is, all of these are only displayed on the command line. Is there any way to store them in a file?
My target is to get a dictionary like this {"model_name" : evaluation_parameters}
TensorBoard is one way out, but again, I want to automate this process in order to find the best model and display everything at once.
So, any ideas or suggestions are welcome!
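One possible approach: the OD API evaluation job writes its COCO metrics into TensorBoard event files under the model's eval directory, and those files can be parsed programmatically. Below is a minimal sketch, assuming a standard eval directory layout; the paths and model name are placeholders, and depending on the TF version a scalar may arrive as a simple_value or as a serialized tensor, so both cases are handled.

import glob
import json
import os

import tensorflow as tf

def read_eval_metrics(eval_dir):
    # Collect the last logged value for every metric tag in the eval dir.
    metrics = {}
    for event_file in sorted(glob.glob(os.path.join(eval_dir, 'events.out.tfevents.*'))):
        for event in tf.compat.v1.train.summary_iterator(event_file):
            for value in event.summary.value:
                if value.HasField('simple_value'):  # TF1-style scalar summary
                    metrics[value.tag] = value.simple_value
                elif value.HasField('tensor'):      # TF2 stores scalars as tensors
                    metrics[value.tag] = float(tf.make_ndarray(value.tensor))
    return metrics

results = {'my_model': read_eval_metrics('training/my_model/eval')}
with open('evaluation.json', 'w') as f:
    json.dump(results, f, indent=2)

Looping this over several model directories gives exactly the {"model_name": evaluation_parameters} dictionary described above.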

Related

How to check the contents of postgres

I'm running tests with Matchstick, and my save() calls don't seem to be working: I set up my tests by saving some entities, but my application code doesn't see them when it goes to load.
Is there any way to check the current state of the backend and see what's in there? Mainly just trying to troubleshoot.
Turns out, you just have to read the docs:
https://thegraph.com/docs/en/developer/matchstick/
logStore() dumps the current contents of the mock store to the console, which is exactly what's needed for this kind of troubleshooting.

Asp .Net Core Controller Task timing out

I have an ASP .NET Core 3.1 website that imports data. Prior to importing, I want to run some validation on the import. So the user selects the file to import, clicks 'Validate', then either gets some validation error messages so they can fix the import file, or allows them to import.
The problem I am running into is the length of time these validation and import processes take. If the file is small, everything works as expected. If the file is larger (over 1,000 records), the validation and/or import may take several minutes. On my local machine, or a server on my network, this works fine. On my actual public-facing website, I am getting:
503 first byte timeout
So, I need some strategies for getting around this. Turning up the timeout seems like a rabbit hole. It looks like BackgroundService/IHostedService is probably the best way to go, but I can't seem to find an example of how to do this in the way I would like:
Call "Validate" via AJAX
Turn on a loader
Perform validation
Turn off loader
Display either success or list of errors to user
UPDATE:
Validation:
Ajax call to controller
Controller calls business logic code:
a. Check file extension
b. Check file size
c. Read in the .csv with CsvHelper
d. Check that all required columns are present
e. Check that required columns contain valid data - length, no whitespace, valid zip code, valid phone, etc.
f. Check for internal duplicates
g. If append (as opposed to overwrite), check for duplicates in the database - this is the slow step
So, is the better solution simply to speed up the validation process? Is BackgroundService overkill?

How to use one scenario's output in another scenario without using properties files

I am working on an API testing project. My requirement is to use the response of one API as the input to another. I need different feature files for each API, so the challenge is to use the output of one API as the input to another, which in my case means the output of one feature file as the input of another.
Also, I don't want to call one feature file from another. To achieve this, we currently use a Runner class to initiate the test and a properties file to store the responses. In the same run we read this properties file, which acts as the input to the other API (feature file).
Is there a better way to do this? We would rather not use a properties file in the framework.
Thanks
I think you are over-complicating your tests. My advice is to combine the two calls into one scenario. Otherwise there is no way to share the data unless you call the second feature file from the first.

recommended way of profiling distributed tensorflow

Currently, I am using the TensorFlow Estimator API to train my model, with distributed training across roughly 20-50 workers and 5-30 parameter servers depending on the training data size. Since I do not have access to the session, I cannot use RunMetadata with the full-trace option to look at the Chrome trace. I see there are two other approaches:
1) tf.profiler.profile
2) tf.train.ProfilerHook
I am specifically using
tf.estimator.train_and_evaluate(estimator, train_spec, test_spec)
where my estimator is a prebuilt estimator.
Can someone give me some guidance on the recommended way to profile an Estimator? Concrete code samples and code pointers would be really helpful, since I am very new to TensorFlow. Do the two approaches give different information, or do they serve the same purpose? And is one recommended over the other?
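For what it's worth, tf.train.ProfilerHook needs no direct session access, because tf.estimator.TrainSpec accepts hooks. A minimal sketch, where estimator, train_input_fn, and eval_input_fn are placeholders for your existing code and the output directory is arbitrary:

import tensorflow as tf

# ProfilerHook periodically dumps a timeline-*.json Chrome trace
# that can be opened in chrome://tracing.
profiler_hook = tf.train.ProfilerHook(
    save_steps=1000,            # dump a trace every 1000 steps
    output_dir='/tmp/profile',  # placeholder output directory
)

train_spec = tf.estimator.TrainSpec(
    input_fn=train_input_fn,    # placeholder: your existing input_fn
    max_steps=100000,
    hooks=[profiler_hook],      # the hook runs alongside training
)
test_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn)  # placeholder

tf.estimator.train_and_evaluate(estimator, train_spec, test_spec)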
There are two things you can try:
ProfilerContext
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/profiler/profile_context.py
Example usage:
import tensorflow as tf

with tf.contrib.tfprof.ProfileContext('/tmp/train_dir') as pctx:
    train_loop()  # everything inside the context is profiled into /tmp/train_dir
ProfilerService
https://www.tensorflow.org/tensorboard/r2/tensorboard_profiling_keras
You can start a ProfilerServer via tf.python.eager.profiler.start_profiler_server(port) on all workers and parameter servers, then use TensorBoard to capture a profile.
Note that this is a very new feature; you may want to use tf-nightly.
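A minimal sketch of the server side, using the module path from the answer above (it may differ across TF versions, and port 6009 is an arbitrary choice):

from tensorflow.python.eager import profiler

# Run this on every worker and parameter server; each host then listens
# for profiling requests issued from TensorBoard's Capture Profile dialog.
profiler.start_profiler_server(6009)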
TensorFlow has recently added a way to sample multiple workers.
Please have a look at the API:
https://www.tensorflow.org/api_docs/python/tf/profiler/experimental/client/trace?version=nightly
The parameter of this API that is important in this context is:
service_addr: A comma-delimited string of gRPC addresses of the workers to profile, e.g.
service_addr='grpc://localhost:6009'
service_addr='grpc://10.0.0.2:8466,grpc://10.0.0.3:8466'
service_addr='grpc://localhost:12345,grpc://localhost:23456'
Also, please look at the API,
https://www.tensorflow.org/api_docs/python/tf/profiler/experimental/ProfilerOptions?version=nightly
The parameter of this API that is important in this context is:
delay_ms: Requests that all hosts start profiling at a timestamp that is delay_ms away from the current time. delay_ms is in milliseconds. If zero, each host will start profiling immediately upon receiving the request. The default value is None, which lets the profiler guess the best value.
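Putting the two APIs together, a minimal sketch; the addresses, log directory, and durations below are placeholders:

import tensorflow as tf

# On each worker and parameter server, expose a profiler endpoint:
tf.profiler.experimental.server.start(6009)

# Then, from any machine that can reach the workers, sample all of them
# at once; the results land in logdir and can be viewed in TensorBoard's
# Profile tab.
tf.profiler.experimental.client.trace(
    service_addr='grpc://10.0.0.2:6009,grpc://10.0.0.3:6009',  # placeholder hosts
    logdir='/tmp/profile-logs',                                # placeholder logdir
    duration_ms=2000,
    options=tf.profiler.experimental.ProfilerOptions(delay_ms=1000),
)

Note that server.start() and client.trace() would normally run in different processes: the server call belongs in each worker's startup code, while the trace call is issued on demand.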

Slow results when using webkitSpeechRecognition vs x-webkit-speech?

I'm new to using this API and wasn't able to find an answer to what I'm running into.
When I use new webkitSpeechRecognition and handle the onresult event to check for isFinal == true, it seems to take longer to arrive at the final result than using x-webkit-speech on an input tag.
Does anyone know if Google is doing something specific to get a speedier result? Or do I need to set an attribute on the webkitSpeechRecognition object?
Thanks for any insight!
See my answer, which explains how, in continuous mode, results are triggered by new voice input and otherwise only show up after a timeout.
In non-continuous mode, the result will show up much faster.