ResourceExhaustedError when running network demo on the fourth try - tensorflow

I have 1600 videos, and I want to create joint-annotation label data for them.
I've already built the OpenPose network; I feed my videos into it and save the joint data as JSON files.
When I feed in the first video there are no errors, and the second and third videos also run without errors.
But when I feed in the fourth video, I get the error messages below.
(Error screenshots omitted; the messages show an out-of-memory ResourceExhaustedError.)
The first, second, third, and fourth videos are all the same size.
Even if I swap the names of the first and fourth videos, I still get the same error on the fourth one.
I think this error has something to do with the graph, but I don't know exactly why.
I think there are many geniuses on Stack Overflow, so please answer my question... :)

I solved this problem by using the CPU instead of the GPU.
I run TensorFlow in CPU-only mode, and it works!
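For anyone who wants to do the same, here is a minimal sketch of how to hide the GPU so TensorFlow falls back to the CPU (it assumes the demo is a Python script you can edit):

import os

# Hide all CUDA devices before TensorFlow is imported so it runs on the CPU only.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf

# Alternative for TF 2.x, after the import: make no GPUs visible to the runtime.
# tf.config.set_visible_devices([], "GPU")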

Related

How to pick the right images for an object detection model for only 1 tag?

Use case: I'm trying to extract certain parts of a screenshot taken from a game (with a TF object detection model) and then extract the text within that part (using a custom model for the font used in the game).
I have trained a custom model based on SSD MobileNet V2 and the object detection works okay-ish, but sometimes the bounding box is off. I googled how to select the right images, and the right number of them, for training the custom model, but I couldn't find a good hint pointing in the right direction.
I'm trying to extract the following (outlined in red):
The environment can change:
The game resolution can differ (1920x1080, WQHD, etc.)
The text in the box is not always the same
I have trained with 120 self-made images (1920x1080, 90% for training, 10% for testing; all of them screenshots of the game) and, as I mentioned, the results are okay-ish. Sometimes the detected area is off (cutting off the content of the box or including a lot of the box's surroundings).
Maybe someone can help me by answering the following questions:
Could a bigger training dataset increase the accuracy?
Should I also take different resolutions into account when creating the training data? (See the augmentation sketch below.)
Would it make sense to feed only the boxes, without the rest of the game screenshot, into the training? Or should I mix full-game screenshots and box-only screenshots?
Thank you in advance! :)
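On the resolution question, one common approach is to randomly rescale the training screenshots so the detector also sees the UI at other effective sizes. Below is a minimal sketch of such an augmentation step; the function name and scale factors are assumptions, not part of the original pipeline (with the TF Object Detection API this kind of augmentation is normally configured via data_augmentation_options in the pipeline config):

import tensorflow as tf

def random_scale(image, boxes, min_factor=0.6, max_factor=1.0):
    # Hypothetical augmentation step: uniformly rescale a screenshot so the
    # detector also trains on other effective resolutions. Boxes are assumed
    # to be normalized [ymin, xmin, ymax, xmax], so they remain valid as-is.
    factor = tf.random.uniform([], min_factor, max_factor)
    size = tf.cast(tf.shape(image)[:2], tf.float32)
    new_size = tf.cast(size * factor, tf.int32)
    return tf.image.resize(image, new_size), boxes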

Blender texture doesn't show up correctly after baking onto it repeatedly, even though the UV mapping fits perfectly

The problem occurs when I repeatedly bake lighting & reflections via Principled BSDF (Cycles) onto an "Image Texture" node. The first few times I get the expected results, and then suddenly the mesh seems to be broken, as it keeps displaying subsequent bakes incorrectly (image below).
Also, when I move an island in the UV map, nothing seems to change on the mesh in the 3D viewport. The UV texture looks unchanged no matter what I do, as if it has frozen.
My Blender version is 2.92. I'm getting the same problem with 2.83.
I keep getting this problem over and over and I just can't find a solution. Even if I export the mesh into another project, it just "infects" that project and I get the same problem there.
I can only fix it by completely starting over.
Please help me. I'm really frustrated with this; it has defeated my Blender project for about the 4th time now... :/
> Screenshot example here <
It appears as if the generated texture coordinates are being used for some reason instead of the UV map coordinates. If the Vector socket of the Image Texture node is unconnected, it should use the currently selected UV map.
This may actually be a bug if it's happening after multiple uses of the baking tool.
You should first try connecting the Image Texture node's Vector input to the UV output of a Texture Coordinate node to see if that has any effect. Alternatively, try connecting a UV Map node.
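If you prefer to wire that up from the Python console instead of the node editor, here is a minimal sketch (it assumes the bake target node is literally named "Image Texture" in the active object's material; adjust the names to your setup):

import bpy

mat = bpy.context.object.active_material
nodes = mat.node_tree.nodes
links = mat.node_tree.links

tex_node = nodes["Image Texture"]             # bake target node (assumed name)
coord_node = nodes.new("ShaderNodeTexCoord")  # Texture Coordinate node
links.new(coord_node.outputs["UV"], tex_node.inputs["Vector"])

# Alternative: pin an explicit UV layer with a UV Map node instead.
# uv_node = nodes.new("ShaderNodeUVMap")
# uv_node.uv_map = "UVMap"                    # name of your UV layer (assumed)
# links.new(uv_node.outputs["UV"], tex_node.inputs["Vector"])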

How to send a texture with Agora Video SDK for Unity

I'm using the package Agora Video SDK for Unity and I have followed these two tutorials:
https://www.agora.io/en/blog/agora-video-sdk-for-unity-quick-start-programming-guide/
https://docs.agora.io/en/Video/screensharing_unity?platform=Unity
Up to here, it is working fine. The problem is that instead of sharing my screen, I want to send a texture. To do so, I'm loading a PNG picture and trying to assign it to the mTexture you find in the second link. It seems to be working on my computer, but it looks like it never arrives at the target computer.
How can I send a texture properly?
Thanks
Did you copy every line of the code from the example as-is? You may not want to do the ReadPixels part, since that reads the screen. You can instead read the raw data from your input texture and send it with PushVideoFrame every update.

I want to add the SPP algorithm to my program so that the input and output image sizes don't have to be fixed

https://github.com/nickliqian/cnn_captcha
Here is the Git repository with the code I am using.
Hello, I have a captcha (verification code) recognition program. I can train the model and verify the recognition results, but the input and output image sizes have to be fixed. Now I want to add the SPP (Spatial Pyramid Pooling) algorithm to my program. I have tried many times and can't get it to work. Can you help me? Thank you.
The goal is to let the input and output images not be a fixed size.
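For reference, here is a minimal sketch of an SPP layer in TensorFlow that turns a variable-sized feature map into a fixed-length vector; the function name and bin sizes are my own choices, not part of cnn_captcha, and it would go in place of the flatten step before the fully connected layers:

import tensorflow as tf

def spatial_pyramid_pool(feature_map, bin_sizes=(1, 2, 4)):
    # feature_map: (batch, height, width, channels), height/width may vary.
    # Max-pool over a pyramid of grids (1x1, 2x2, 4x4 by default) and
    # concatenate, giving sum(b*b) * channels values per example regardless
    # of the spatial size of the input.
    shape = tf.shape(feature_map)
    h = tf.cast(shape[1], tf.float32)
    w = tf.cast(shape[2], tf.float32)
    pooled = []
    for bins in bin_sizes:
        for i in range(bins):
            for j in range(bins):
                # Bin boundaries chosen so the grid exactly covers the map.
                y0 = tf.cast(tf.math.floor(h * i / bins), tf.int32)
                y1 = tf.cast(tf.math.ceil(h * (i + 1) / bins), tf.int32)
                x0 = tf.cast(tf.math.floor(w * j / bins), tf.int32)
                x1 = tf.cast(tf.math.ceil(w * (j + 1) / bins), tf.int32)
                region = feature_map[:, y0:y1, x0:x1, :]
                pooled.append(tf.reduce_max(region, axis=[1, 2]))
    return tf.concat(pooled, axis=1)  # (batch, sum(b*b) * channels)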

AviSynth AvsP Dissolve not working

I'm trying to make a 30-second video out of images and can't seem to find useful information on how to achieve this; if someone could point me to some examples, that would be great. Other than that, I have my script below. I think Dissolve should join the images, is that right? And how do I connect the first message window so that it appears first? Basically, I want all three of these to play top to bottom as one video clip.
Script below:
MyMessage="Wireless Communications"
MessageClip(MyMessage, 320,240, text_color=color_antiquewhite,
\bg_color=color_blue)
Rails="C:\Users\me\Desktop\1.png"
RailClip=ImageReader(Rails,start=0,end=100,fps=25)
Info(RailClip)
PointResize(RailClip, 320,240, 0,20, 148,148)
Rails2="C:\Users\me\Desktop\3.jpg"
RailClip2=ImageReader(Rails2,start=101,end=200,fps=25)
Dissolve(RailClip+RailClip2,25)
The error I get is: "frame sizes don't match".