How can I fix the "Display 3D structure" error in AlphaFold2? - google-colaboratory

I'm a graduate student working on protein structure.
I usually predict protein structures with AlphaFold2,
but recently I have a problem with the prediction results.
When the prediction was done, the "Display 3D structure" cell didn't work.
It complained about limited GPU usage when I tried to re-run that cell,
so I purchased Colab Pro to extend the GPU limits.
But the "Display 3D structure" cell still doesn't show any figure.
How can I fix this problem?
I would appreciate your assistance.
Sincerely,
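One workaround, if the prediction itself finished, is to render the saved structure directly instead of re-running the notebook's display cell. Below is a minimal sketch using py3Dmol, the viewer the AlphaFold2 Colab notebooks typically use; the file name prediction.pdb is a placeholder for whatever PDB file your run actually wrote:

    # Sketch: render a finished AlphaFold2 prediction in a Colab cell
    # with py3Dmol, without re-running the prediction itself.
    # "prediction.pdb" is a placeholder; point it at your run's output.
    import py3Dmol

    with open("prediction.pdb") as f:
        pdb_text = f.read()

    view = py3Dmol.view(width=800, height=600)
    view.addModel(pdb_text, "pdb")
    view.setStyle({"cartoon": {"color": "spectrum"}})
    view.zoomTo()
    view.show()

Because this reads the PDB from disk, the display step needs no GPU at all, so the GPU quota should not matter for it.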

Related

How to generate a QR Code Model 2 on an Adafruit MagTag?

I purchased an Adafruit MagTag, a board with an ESP32-S2 chip and a 2.9" grayscale E-Ink display. I would like to generate a QR Code Model 2, but it does not appear that the MagTag QR code library supports this model of QR code, or I am missing something.
Does anyone have suggestions or more experience? I need a bit of a steer in the right direction; any help is greatly appreciated.
I may be wrong, but the main reason you might want Model 2 specifically is that it reads better when the QR code is distorted on a curved surface, and the E-Ink display on the MagTag isn't curved. https://www.qrcode.com/en/codes/model12.html
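If a standard QR symbol would do, the adafruit_miniqr CircuitPython library is one option. A minimal sketch (the payload and qr_type below are example values, and drawing to the E-Ink screen via displayio is left out):

    # Sketch: generate a QR code in CircuitPython with adafruit_miniqr.
    # qr_type (the symbol version) and the payload are example values.
    import adafruit_miniqr

    qr = adafruit_miniqr.QRCode(qr_type=3, error_correct=adafruit_miniqr.L)
    qr.add_data(b"https://www.adafruit.com")
    qr.make()

    # qr.matrix holds the finished module grid; walk it to draw onto a
    # displayio.Bitmap (scaling each module up for the 2.9" display).
    for y in range(qr.matrix.height):
        print("".join("#" if qr.matrix[x, y] else " "
                      for x in range(qr.matrix.width)))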

How to pick the right images for an object detection model for only 1 tag?

Use case: I'm trying to extract certain parts of a screenshot taken from a game (with a TF object detection model) and then extract the text within that part (a custom model for the font used in the game).
I have trained a custom model based on SSD MobileNet V2, and the object detection works reasonably well, but sometimes the bounding box is off. I googled how to select the right images, and the right amount of them, for training a custom model, but I couldn't find a good hint pointing in the right direction.
I am trying to extract the following (surrounded by red):
The environment can change:
Resolution of the game can be different (1920x1080, WQHD, etc.)
Text in the box is not always the same
I have trained with 120 self-made images (1920x1080), 90% for training and 10% for testing; all of these images were screenshots of the game. As I mentioned, the results are reasonable, but sometimes the detected area is off (cutting off the content of the box or including a lot of the box's surroundings).
Maybe someone can help me by answering the following questions:
Could a bigger training dataset increase the accuracy?
Should I also take different resolutions into account when creating the training data?
Would it make sense to feed only the boxes, without the rest of the game screenshot, into the training? Or should I mix screenshots of the whole game with box-only screenshots?
Thank you in advance! :)
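On the resolution question specifically, a common approach is to augment the existing 1920x1080 screenshots with random rescaling so the model sees varied input sizes. A minimal TensorFlow sketch (illustrative only, assuming normalized [ymin, xmin, ymax, xmax] boxes, which a uniform rescale leaves unchanged):

    import tensorflow as tf

    # Illustrative augmentation: randomly rescale a screenshot to simulate
    # different game resolutions. Boxes in normalized 0..1 coordinates are
    # unaffected by a uniform rescale of the whole image.
    def random_rescale(image, boxes, min_scale=0.7, max_scale=1.3):
        scale = tf.random.uniform([], min_scale, max_scale)
        size = tf.cast(tf.shape(image)[:2], tf.float32)
        image = tf.image.resize(image, tf.cast(size * scale, tf.int32))
        return image, boxes

    # Example: dataset = dataset.map(random_rescale)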

Blender texturing doesn't show up correctly after repeatedly baking onto it, even though the UV mapping fits perfectly

The problem occurs when I repeatedly bake lighting & reflections via Principled BSDF (Cycles) onto an "Image Texture" node. The first few times I get the expected results, and then suddenly the mesh seems to be broken, as it keeps showing subsequent bakes incorrectly (image below).
Also, when I move an island in the UV map, nothing seems to change on the mesh in the 3D viewport. The UV texture looks unchanged no matter what I do, like it has frozen or something.
My Blender version is 2.92. I'm getting the same problem with 2.83.
I keep getting this problem over and over, and I just can't find a solution. Even if I export the mesh into another project, it just "infects" the other project and I get the same problem there.
I can only repair it if I completely start over.
Please help me. I'm really frustrated with this. It has defeated my Blender project for about the 4th time now... :/
> Screenshot example here <
It appears as if the generated texture coordinates are being used for some reason instead of the UV map coordinates. If the Vector socket of the Image Texture node is unconnected, it should use the currently selected UV map.
This may actually be a bug if it's happening after multiple uses of the baking tool.
You should first try connecting the Image Texture node's Vector input to the UV output of a Texture Coordinate node to see if it has any effect. Alternatively, try connecting a UV Map node.
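For reference, the same rewiring can be done from Blender's Python console; a minimal sketch, assuming the active material has one Image Texture node and the UV layer is named "UVMap":

    import bpy

    # Sketch: wire the active material's image texture to an explicit
    # UV Map node, so generated coordinates can't silently take over.
    mat = bpy.context.object.active_material
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links

    img_node = next(n for n in nodes if n.type == 'TEX_IMAGE')
    uv_node = nodes.new('ShaderNodeUVMap')
    uv_node.uv_map = "UVMap"  # assumed UV layer name; adjust to your mesh

    links.new(uv_node.outputs['UV'], img_node.inputs['Vector'])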

I want to add the SPP algorithm to my program so that the input and output images don't have to be a fixed size

https://github.com/nickliqian/cnn_captcha
Here is the Git repository with the code I am using.
Hello, I have a verification-code (captcha) recognition program. I can train the model and verify the recognition results, but the input and output image sizes have to be fixed. Now I want to add the SPP (Spatial Pyramid Pooling) algorithm to my program. I have tried many times and can't get it to work. Can you help me? Thank you.
The goal: let the input and output pictures not be fixed in size.
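For context, SPP replaces the flatten in front of the first dense layer with pooling over a fixed number of bins, so the feature map (and hence the input image) can vary in size while the output vector length stays constant. A minimal TensorFlow sketch of the idea (not code from the linked repo):

    import tensorflow as tf

    class SpatialPyramidPooling(tf.keras.layers.Layer):
        """Max-pools a variable-size feature map into a fixed-length vector.

        For each pyramid level n, the map is split into an n x n grid and
        each cell is max-pooled, so the output length depends only on the
        channel count and the levels, not on the input height/width.
        """

        def __init__(self, levels=(1, 2, 4), **kwargs):
            super().__init__(**kwargs)
            self.levels = levels

        def call(self, x):
            shape = tf.shape(x)
            h, w = shape[1], shape[2]
            pooled = []
            for n in self.levels:
                for i in range(n):
                    for j in range(n):
                        # Integer bin edges tiling the whole map; assumes
                        # h and w >= max(levels) so no bin is empty.
                        region = x[:, (i * h) // n:((i + 1) * h) // n,
                                      (j * w) // n:((j + 1) * w) // n, :]
                        pooled.append(tf.reduce_max(region, axis=[1, 2]))
            return tf.concat(pooled, axis=1)

The output length is channels * sum(n^2 for n in levels) regardless of image size, so only the dense layers after this point need a fixed size; the convolutional layers before it already accept variable inputs.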

ResourceExhaustedError when running network demo on the fourth try

I have 1600 videos, and I want to produce joint-annotation label data for them.
I've already set up the OpenPose network; I feed my videos into the network and save the joint data as JSON files.
When I put my first video in as input, there are no errors, and when I put the second and third videos in, there are no errors either.
But when I put the fourth video in as input, I get the error message below.
[Screenshots of the error message omitted: it is an out-of-memory (OOM) ResourceExhaustedError.]
The first, second, third, and fourth videos are all the same size.
When I swap the names of the first and fourth videos, I still get the same error on the fourth video.
I think this error is related to the graph, but I don't know exactly why.
I think there are many geniuses on Stack Overflow, so please answer my question... :)
I solved this problem by using the CPU instead of the GPU.
I run TensorFlow on the CPU only, and it works!
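For the record, the usual way to force TensorFlow onto the CPU (a sketch of this approach, not the poster's exact code) is to hide the GPUs before TensorFlow is imported:

    import os

    # Hide every GPU from TensorFlow *before* it is imported, so the whole
    # pipeline runs on the CPU (slower, but not bounded by GPU memory).
    os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

    import tensorflow as tf
    print(tf.config.list_physical_devices("GPU"))  # expected: []

If the graph really is growing from video to video, calling tf.keras.backend.clear_session() between videos is another thing worth trying.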