SeaweedFS: How to set the readTimeout argument?

I have read through the SeaweedFS wiki and found that there is an argument that could be helpful in my case of handling large files over unpredictable network connections. The argument is -readTimeout=, described on the wiki's Optimization page under the "upload large files" section. Could anyone please explain how and where I can use this argument?

use "-idleTimeout" option

Related

How to stop the PNG warning "iCCP: known incorrect sRGB profile" printing to console?

I am trying to run some code whose functionality cannot be altered, but it would be great if I could somehow ignore the warnings or prevent them from being printed to the console, as they congest it and make it unreadable. Thank you.
You can try tf.autograph.set_verbosity; it is used to control how much info TensorFlow logs, as indicated in their docs for TF 2.6:
tf.autograph.set_verbosity(
    level=0, alsologtostdout=False
)
Also check the answers here, which cover solutions for multiple versions of TensorFlow and Python.
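A minimal sketch of how these knobs are typically combined (assuming TF 2.x; whether they silence this particular libpng warning depends on where it is emitted, so treat this as a starting point rather than a guaranteed fix):

import os
# must be set before importing tensorflow; 0=all, 1=hide INFO, 2=hide INFO+WARNING, 3=hide INFO+WARNING+ERROR
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf

tf.get_logger().setLevel('ERROR')                           # Python-side logger
tf.autograph.set_verbosity(level=0, alsologtostdout=False)  # autograph messages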

How to fix the problem of downloading fasttext-model300?

I'm using Windows 10 and Python 3.3. I tried to download fasttext_model300 to calculate soft cosine similarity between documents, but when I run my Python file, it stops after arriving at this statement:
fasttext_model300 = api.load('fasttext-wiki-news-subwords-300')
There are no errors and no "not responding" message; it just stops without any reaction.
Does anybody know why it happens?
Thanks
I'd recommend against using the gensim api.load() functionality. It dynamically runs new, unversioned source code from remote servers – which is opaque in its operations & suboptimal for maintaining a secure local configuration, or debugging any issues which occur.
Instead, find the actual exact data files you trust and download them as plain data. Then, use specific library operations, like the KeyedVectors.load_word2vec_format() method, to instantiate exactly the model you need, using precise local-file paths you understand.
Following those steps may make it clearer what, if anything, is going wrong. If it doesn't, try also enabling logging at the INFO level to gather more information about what progress is made before failure (and add any new details as a comment or to your question).
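A minimal sketch of that approach, assuming the vectors were downloaded manually as a plain-text word2vec-format file (the file path below is a placeholder, not the real download location):

import logging
from gensim.models import KeyedVectors

# INFO logging shows how far loading gets before any hang or failure
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

# replace with wherever you saved the downloaded vectors
fasttext_model300 = KeyedVectors.load_word2vec_format(
    '/path/to/fasttext-wiki-news-subwords-300.vec',
    binary=False,  # plain-text .vec file; use binary=True for a .bin file
)
print(fasttext_model300.most_similar('computer', topn=3))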
python3 -m gensim.downloader --download fasttext-wiki-news-subwords-300
Try using this. Source: https://awesomeopensource.com/project/RaRe-Technologies/gensim-data

How do I emulate WebGL stuff with OSMesa on AWS Lambda?

I would like to take a screenshot of a web site that uses WebGL. I don't have to use a GPU to open the site; emulation is enough for me.
I already tried headless-chrome for this. It can take screenshots of ordinary web sites, but it does not work for WebGL canvases.
I think one possibility is using OSMesa or something similar to emulate OpenGL.
I have exhausted every strategy I had for overcoming this. Is it actually possible to do?
If yes, please tell me how. If no, I would like to know why.
Thanks.
Yes it is possible!
You need the right combination of:
headless-chromium binary
libosmesa.so binary (in the same directory)
launch chrome headless with the right flags, such as (see link for full details): ['--use-gl=osmesa', '--enable-webgl', '--ignore-gpu-blacklist', '--homedir=/tmp', '--single-process', '--data-path=/tmp/data-path', '--disk-cache-dir=/tmp/cache-dir']
This thread on the serverless-chrome github project discusses the issue and provides some binaries which I have used to capture screenshots of WebGL content on AWS Lambda using Page.captureScreenshot().
https://github.com/adieuadieu/serverless-chrome/issues/108#issuecomment-416494572
(See comment by #apalchys on 28th August)
This particular example uses SwiftShader which seems to be the preferred option going forward.
Note, however, that I was unable to create PDFs with Page.printToPDF() using this version: WebGL content just appears blank/white. I was able to get Page.printToPDF() working with an earlier version that uses osmesa; see https://github.com/adieuadieu/serverless-chrome/issues/108#issuecomment-371199530
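As a rough illustration of wiring those flags together (a sketch only: the binary path, output path, and target URL are placeholders, and --headless/--screenshot/--window-size are standard Chrome switches added here; the combination that actually works on Lambda is the one discussed in the linked issue):

import subprocess

CHROME = '/opt/headless-chromium'  # placeholder path to the bundled binary

flags = [
    '--headless',
    '--use-gl=osmesa',            # or swiftshader, per the linked issue
    '--enable-webgl',
    '--ignore-gpu-blacklist',
    '--homedir=/tmp',
    '--single-process',
    '--data-path=/tmp/data-path',
    '--disk-cache-dir=/tmp/cache-dir',
    '--window-size=1280,800',
    '--screenshot=/tmp/screenshot.png',
]

subprocess.run([CHROME, *flags, 'https://example.com/webgl-page'], check=True)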

Is local_variables_initializer really necessary?

In practice, isn't running global_variables_initializer enough to initialize model variables?
local_variables_initializer seems to be unnecessary and absent even in official and semi-official tensorflow example code. See for example:
https://github.com/dandelionmane/tf-dev-summit-tensorboard-tutorial
https://www.tensorflow.org/get_started/mnist/pros
In both cases only global_variables_initializer is used.
Am I missing something here? Is there any case where I should call local_variables_initializer explicitly?
local_variables_initializer is useful in particular for streaming metrics (e.g. tf.contrib.metrics.streaming_auc). As said in the doc of contrib.metrics:
Because the streaming metrics use local variables, the Initialization stage is performed by running the op returned by tf.local_variables_initializer().
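A minimal TF 1.x sketch of where this matters, using tf.metrics.auc (the non-contrib counterpart of streaming_auc); without the local_variables_initializer() call, the metric's accumulator variables are uninitialized and the session raises a FailedPreconditionError:

import tensorflow as tf  # TF 1.x graph/session style

labels = tf.constant([0., 0., 1., 1.])
predictions = tf.constant([0.1, 0.4, 0.35, 0.8])

# streaming metrics accumulate state in *local* variables
auc_value, update_op = tf.metrics.auc(labels, predictions)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # model variables
    sess.run(tf.local_variables_initializer())   # metric accumulators
    sess.run(update_op)                          # process one batch
    print(sess.run(auc_value))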

How Does WinHttpCrackUrl Work?

The MSDN documentation on WinHttpCrackUrl makes a point of saying it runs "synchronously". But could someone provide some authoritative documentation stating whether WinHttpCrackUrl does or does not attempt a network connection? I've searched fruitlessly and would like to avoid disassembling the function. Thanks.
WinHttpCrackUrl() DOES NOT use the network in any way. There is no documentation that states that, but all it does is parse the input string into its individual components, nothing more, so there is no need for the network.
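A small sketch that exercises the call from Python via ctypes, which makes the pure-parsing behavior easy to observe (Windows only; the URL_COMPONENTS field layout below is reproduced from memory of winhttp.h, so verify it against the header before relying on it):

import ctypes
from ctypes import wintypes

class URL_COMPONENTS(ctypes.Structure):
    _fields_ = [
        ('dwStructSize',      wintypes.DWORD),
        ('lpszScheme',        wintypes.LPWSTR),
        ('dwSchemeLength',    wintypes.DWORD),
        ('nScheme',           ctypes.c_int),
        ('lpszHostName',      wintypes.LPWSTR),
        ('dwHostNameLength',  wintypes.DWORD),
        ('nPort',             wintypes.WORD),
        ('lpszUserName',      wintypes.LPWSTR),
        ('dwUserNameLength',  wintypes.DWORD),
        ('lpszPassword',      wintypes.LPWSTR),
        ('dwPasswordLength',  wintypes.DWORD),
        ('lpszUrlPath',       wintypes.LPWSTR),
        ('dwUrlPathLength',   wintypes.DWORD),
        ('lpszExtraInfo',     wintypes.LPWSTR),
        ('dwExtraInfoLength', wintypes.DWORD),
    ]

url = ctypes.create_unicode_buffer('https://user@example.com:8443/some/path?q=1')

uc = URL_COMPONENTS()
uc.dwStructSize = ctypes.sizeof(uc)
# NULL pointer members plus non-zero lengths ask the function to return
# pointers into the url buffer above instead of copying into extra buffers
for name in ('dwSchemeLength', 'dwHostNameLength', 'dwUserNameLength',
             'dwPasswordLength', 'dwUrlPathLength', 'dwExtraInfoLength'):
    setattr(uc, name, 0xFFFFFFFF)

winhttp = ctypes.WinDLL('winhttp')
if not winhttp.WinHttpCrackUrl(url, 0, 0, ctypes.byref(uc)):
    raise ctypes.WinError()

# nothing network-related has happened; this is string parsing only
print('host:', uc.lpszHostName[:uc.dwHostNameLength])
print('port:', uc.nPort)
print('path:', uc.lpszUrlPath[:uc.dwUrlPathLength])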