PsychoPy Coder: TextStim or not TextStim

Situation
I'm using the PsychoPy Coder to create a random dot motion task in a speed-accuracy trade-off setting. I want a letter as the fixation point to inform the subject, on every trial, whether they are in the "speed" or the "precision" condition, so I first thought of simply drawing a TextStim (like "S" or "P"). But I've heard that TextStim is pretty slow to draw, and because of the dynamic nature of the RDK task I'm afraid that if the TextStim takes too much time it will impact the display of the dots.
Question
Am I right?
And if so, what would be the best way to draw the "fixation letters"?

Well, it seems I found the answer in the TextStim reference manual, so I put it here in case someone needs it:
Performance note: in general, TextStim is slower than many other visual stimuli, i.e. it takes longer to change some attributes. In general, it's the attributes that affect the shapes of the letters: text, height, font, bold, etc. These make the next .draw() slower because the text has to be set again. You can make the draw() quick by re-setting the text (myTextStim.text = myTextStim.text) when you've changed the parameters.
So the slowdown seems to concern changing those attributes, which is not my situation.

If you've just got one letter being drawn, it shouldn't have a major impact on your RDK, but do check whether frames are being dropped. All of these things depend on graphics card and CPU speed, so you need to test individually for each machine/experiment.
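In case it's useful, here's a minimal sketch of that check (window size, dot parameters, and trial length are illustrative, not from the original task): the cue text is set once before the animation loop, so each draw() stays cheap, and recordFrameIntervals/nDroppedFrames report whether any frames were dropped.

    # Minimal sketch: a static one-letter cue drawn alongside an RDK,
    # with PsychoPy's frame-interval recording used to detect drops.
    from psychopy import visual, core

    win = visual.Window([800, 600], units='pix')
    win.recordFrameIntervals = True          # enables dropped-frame counting

    cue = visual.TextStim(win, text='S')     # set once; 'S' = speed condition
    dots = visual.DotStim(win, nDots=100, fieldSize=(400, 400),
                          dotLife=5, speed=2)

    # If the text *did* change, re-set it here (cue.text = cue.text) so the
    # expensive re-layout happens before the timing-critical loop.
    for frame in range(120):                 # ~2 s at 60 Hz
        dots.draw()
        cue.draw()                           # cheap: no attribute changes
        win.flip()

    print('dropped frames:', win.nDroppedFrames)
    win.close()
    core.quit()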

Related

Information about CGAL and alternatives

I'm working on a problem that will eventually run on an embedded microcontroller (ESP8266). I need to perform some fairly simple operations on linear equations. I don't need much, but I do need to be able to work with points and linear equations to:
Define equations for lines, either from two known points or from one point and a gradient
Calculate a new x,y point on an equation line that is a specific distance from another point on that equation line
Drop a perpendicular onto an equation line from a point
Perform variations of cosine-rule calculations on points and triangle sides defined as equations
I roughed up some code for this a while ago based on high-school "y = mx + c" concepts, but it's flawed (it fails with infinities when lines are vertical), and it's currently in Scala. Since I suspect I'm reinventing a wheel, and that's not my primary goal, I'd like to use someone else's work for this!
I've come across CGAL, and it seems very likely it's capable of all this and more, but I have two questions about it (given that it seems to take ages to get enough understanding of this kind of huge library to be able to answer even simple questions!):
It seems to assert some kind of mathematical perfection in its calculations, but that's not important to me, and my system will be severely memory-constrained. Does it use/offer memory-efficient approximations?
Is it possible (and hopefully easy) to separate out just a limited subset of features, or am I going to find the entire library (or even a very large subset) heading into my memory-limited machine?
And, I suppose the inevitable follow up: are there more suitable libraries I'm unaware of?
TIA!
The problems you mention sound fairly simple indeed, so I'm wondering if you really need any library at all. Maybe if you post your original code we could help you fix it; your problem sounds like you need to redo a calculation to avoid a division by zero.
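For what it's worth, here's a hedged sketch of that fix (function names are made up): representing a line as the triple (a, b, c) of ax + by + c = 0, rather than as y = mx + c, removes the vertical-line infinity entirely and keeps the operations you listed simple.

    # Lines as (a, b, c) with a*x + b*y + c = 0: no slope, no infinities.
    import math

    def line_from_points(p1, p2):
        # Line through two points; also works when x1 == x2 (vertical).
        (x1, y1), (x2, y2) = p1, p2
        a, b = y2 - y1, x1 - x2
        return a, b, -(a * x1 + b * y1)

    def foot_of_perpendicular(line, p):
        # Drop a perpendicular from p onto the line.
        a, b, c = line
        d = (a * p[0] + b * p[1] + c) / (a * a + b * b)
        return p[0] - a * d, p[1] - b * d

    def point_at_distance(line, p, t):
        # Point on the line at signed distance t from p (p on the line).
        a, b, _ = line
        n = math.hypot(a, b)
        return p[0] - b * t / n, p[1] + a * t / n

    vertical = line_from_points((2, 0), (2, 5))     # a vertical line
    print(foot_of_perpendicular(vertical, (5, 3)))  # -> (2.0, 3.0)

The same representation ports directly to C on the ESP8266, and a line from one point p and gradient m is just line_from_points(p, (p[0] + 1, p[1] + m)).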
As for your point (2) about separating out a limited number of features from CGAL: given the size and the coding style of that project, in my experience that will be significantly more complicated (if at all possible) than fixing your own code.
In case you want to try a simpler library than CGAL, you could try Boost.Geometry.
Regards,

Unity Physics Optimization (lots of ragdoll characters)

Hi, I am developing my new game; it is like an infinite runner. I am using object pooling to instantiate objects. I have lots of characters with animations and ragdolls.
Physics shows up as very expensive in the profiler on my iPad 3. When I destroy the characters, everything works fine. The characters have an Animator, a ragdoll, and a simple waypoint script.
How can I optimize that?
Okay, first take into account the maximum number of characters on screen. As far as I can see you also wish to optimize this as much as possible, so I have a few tips.
The first thing I would do is look at the triangle count and get it as low as possible for each model without sacrificing the aesthetics.
Next I would set up an LOD system where, as an object moves further away, the detail decreases, saving triangles. You should repeat this with textures, animations, and some of the ragdolls.
Once that is done, look at the more expensive functions called in your code and see if you can find an alternative, like you have done with object pooling.
Good luck.
You can do some things to improve the time spent on physics calculations.
1. The most important is to avoid MeshCollider; it has much higher performance overhead. Use primitive colliders wherever you can, or combine a few primitives.
2. Adjust the Fixed Timestep setting. You can reduce overhead by reducing physics accuracy. (See the rough arithmetic below.)
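Rough arithmetic to see why this matters (Unity's default Fixed Timestep is 0.02 s; the other values are illustrative): physics cost scales roughly with the number of fixed updates per second.

    # Fixed timestep vs. physics steps per second (back-of-envelope only)
    for fixed_dt in (0.01, 0.02, 0.0333):
        print('%.4f s timestep -> ~%d physics steps/sec'
              % (fixed_dt, round(1 / fixed_dt)))

Going from 0.01 s to 0.0333 s cuts the physics work per second to roughly a third, at the price of coarser collision and ragdoll accuracy.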

When to use VK_IMAGE_LAYOUT_GENERAL

It isn't clear to me when it's a good idea to use VK_IMAGE_LAYOUT_GENERAL as opposed to transitioning to the optimal layout for whatever action I'm about to perform. Currently, my policy is to always transition to the optimal layout.
But VK_IMAGE_LAYOUT_GENERAL exists. Maybe I should be using it when I'm only going to use a given layout for a short period of time.
For example, right now I'm writing code to generate mipmaps using vkCmdBlitImage. As I loop through the sub-resources performing the vkCmdBlitImage commands, should I transition each level to VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL as I scale down into it, then to VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL when it becomes the source for the next mip, before finally transitioning everything to VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL when I'm all done? It seems like a lot of transitioning, and maybe generating the mips in VK_IMAGE_LAYOUT_GENERAL is better.
I appreciate the answer might be to measure, but it's hard to measure on all my target GPUs (especially because I haven't got anything running on Android yet) so if anyone has any decent rule of thumb to apply it would be much appreciated.
FWIW, I'm writing Vulkan code that will run on desktop GPUs and Android, but I'm mainly concerned about performance on the latter.
You would use it when:
You are lazy
You need to map the memory to host (unless you can use PREINITIALIZED)
When you use the image as multiple incompatible attachments and you have no choice
For storage images
(5. Other cases where you would otherwise switch layouts too often, relative to the work done on the images, and where you don't even need barriers. Measurement is needed to confirm GENERAL is better in that case; most likely a premature optimization even then.)
PS: You could transition all the mip levels together to TRANSFER_DST with a single command beforehand, and then transition only the one you need to TRANSFER_SRC. With a decent HDD, it might even be best to store the images with their mip-maps already generated, if that's an option (and perhaps even get better quality by using a more sophisticated downsampling algorithm offline).
PS2: Too bad there's no dedicated mip-map generation command. The cmdBlit most likely does something like it under the hood anyway for images smaller than half resolution...
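To make the batching in the first PS concrete, here's a small Python sketch that only prints the command ordering (the strings stand in for the real layout enums, and each tuple stands in for a vkCmdPipelineBarrier or vkCmdBlitImage recorded into the command buffer; this is an illustration, not Vulkan API code):

    # Mip-chain generation order: one batched transition to TRANSFER_DST,
    # then per-level DST -> SRC transitions interleaved with the blits.
    def mip_chain_commands(mip_levels):
        cmds = [('barrier', 'mips 0..%d' % (mip_levels - 1),
                 'UNDEFINED -> TRANSFER_DST')]
        for level in range(mip_levels - 1):
            cmds.append(('barrier', 'mip %d' % level,
                         'TRANSFER_DST -> TRANSFER_SRC'))
            cmds.append(('blit', 'mip %d -> mip %d' % (level, level + 1)))
        # Note: at this point the last level is still TRANSFER_DST while the
        # others are TRANSFER_SRC, so a real implementation needs two final
        # barriers (or per-level ones) to reach SHADER_READ_ONLY.
        cmds.append(('barrier', 'all mips', '-> SHADER_READ_ONLY'))
        return cmds

    for cmd in mip_chain_commands(4):
        print(*cmd)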
If you read from the mipmap[n] image to create the mipmap[n+1] image, then you should use the transfer image layouts if you want your code to run on all Vulkan implementations and get the most performance across all of them, as the layout flags may be used by the GPU to optimize the image for reads or writes.
So if you want to go cross-vendor, only use VK_IMAGE_LAYOUT_GENERAL for setting up the descriptor that uses the final image, and not for the image reads or writes.
If you don't want to use that many transitions, you may copy from a buffer instead of an image, though you obviously wouldn't get the format conversion, scaling, and filtering that vkCmdBlitImage does for you for free.
Also don't forget to check if the target format actually supports the BLIT_SRC or BLIT_DST bits. This is independent of whether you use the transfer or general layout for copies.

Working around WebGL readPixels being slow

I'm trying to use WebGL to speed up computations in a simulation of a small quantum circuit, like what the Quantum Computing Playground does. The problem I'm running into is that readPixels takes ~10ms, but I want to call it several times per frame while animating in order to get information out of gpu-land and into javascript-land.
As an example, here's my exact use case. The following circuit animation was created by computing things about the state between each column of gates, in order to show the inline-with-the-wire probability-of-being-on graphs.
The way I'm computing those things now, I'd need to call readPixels eight times for the above circuit (once after each column of gates). This is waaaaay too slow at the moment, easily taking 50ms when I profile it (bleh).
What are some tricks for speeding up readPixels in this kind of use case?
Are there configuration options that significantly affect the speed of readPixels? (e.g. the pixel format, the size, not having a depth buffer)
Should I try to make the readPixel calls all happen at once, after all the render calls have been made (maybe allows some pipelining)?
Should I try to aggregate all the textures I'm reading into a single megatexture and sort things out after a single big read?
Should I be using a different method to get the information back out of the textures?
Should I be avoiding getting the information out at all, and doing all the layout and rendering gpu-side (urgh...)?
Should I try to make the readPixel calls all happen at once, after all the render calls have been made (maybe allows some pipelining)?
Yes, yes, yes. readPixels is fundamentally a blocking, pipeline-stalling operation, and it is always going to kill your performance wherever it happens, because it's sending a request for data to the GPU and then waiting for it to respond, which normal draw calls don't have to do.
Do readPixels as few times as you can (use a single combined buffer to read from). Do it as late as you can. Everything else hardly matters.
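As a shape-of-the-data illustration of that single combined buffer (NumPy stands in for the GPU here, and all names are made up): render each gate column's results into its own row of one target, then do one read and slice CPU-side.

    # One combined RGBA target instead of eight separate readbacks.
    import numpy as np

    columns, width = 8, 64                  # 8 gate columns, 64 texels each
    combined = np.zeros((columns, width, 4), dtype=np.uint8)

    for col in range(columns):              # stand-in for the 8 draw passes
        combined[col, :, 0] = col * 30      # each pass writes only its row

    pixels = combined.copy()                # the single blocking "readPixels"
    per_column = [pixels[col] for col in range(columns)]  # slice afterwards
    print(len(per_column), per_column[3][0])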
Should I be avoiding getting the information out at all, and doing all the layout and rendering gpu-side (urgh...)?
This will get you immensely better performance.
If your graphics are all like you show above, you shouldn't need to do any "layout" at all (which is good, because it'd be very awkward to implement): everything but the text is some kind of color or boundary animation which could easily be done in a shader, and all the layout can be just a static vertex buffer (each vertex has attributes pointing at the simulation-state texel it should depend on).
The text will be more tedious, merely because you need to load all the digits into a texture to use as a spritesheet and do lookups into it, but that's a standard technique. (Oh, and divide/modulo to get the digits.)
I don't know enough about your use case but just guessing, Why do you need to readPixels at all?
First, you don't need to draw text or the static parts of your diagram in WebGL. Put another canvas, svg, or img over the WebGL canvas and set the CSS so they overlap. Let the browser composite them; then you don't have to do it.
Second, let's assume you have a texture with your computed results in it. Can't you just make some geometry that matches the places in your diagram that need to have colors, and use texture coords to look up the results from the correct places in the results texture? Then you don't need to call readPixels at all. That shader can use a ramp-texture lookup or any other technique to convert the results to other colors to shade the animated parts of your diagram.
If you want to draw numbers based on the result, you can use a technique like this: make a shader that references the results texture, looks up a result value, and then indexes glyphs from another texture based on it.
Am I making any sense?

Harder, Better, Faster, Stronger... Techniques for an image-based CAPTCHA?

There are lots of non-image-based CAPTCHA ideas floating around. But what about the old-fashioned way?
What are the elements of a good image CAPTCHA? What visual elements are hard for computers, but easier for humans? What about mistakes, elements that are easier for computers than they are for humans? What are good techniques for increasing the speed of a CAPTCHA generator?
Here's an example of a CAPTCHA I've been working on. It generates the functions for two sine waves, then stretches text between them. It lays that over a background drawn from a pool of images.
How could this be improved? (Specifically, I'm using PHP GD.) Things that come to mind are:
Change the color of the text, possibly making it multicolored.
Add "scratches" or marks that mildly obscure the text.
Add to the distortion so that it's affected by sine waves horizontally as well.
What goes into a superb image CAPTCHA?
Edit:
I know that there are some very worthy third-party CAPTCHA resources. I'm looking for the attributes that make them good. I'd like to use my own CAPTCHAs, just for the purpose of self-improvement. So, you can talk about reCAPTCHA, but it's not exactly what I'm looking for.
Also, it has been brought up that not only the image, but also the experience matters, so feel free to comment on that.
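For reference, here's roughly what the sine-wave stretch described above looks like in code. The question uses PHP GD; this hedged sketch uses Python/Pillow instead, and the text, sizes, and wave amplitude are placeholder choices.

    # Sine-wave vertical displacement of rendered text (Pillow version).
    import math
    from PIL import Image, ImageDraw, ImageFont

    text, w, h = 'K7GX2', 200, 70
    img = Image.new('L', (w, h), 255)        # white canvas
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()          # use a real TTF in production
    draw.text((20, 25), text, font=font, fill=0)

    # Shift each pixel column vertically along a sine wave; a second remap
    # over x would add the horizontal wave suggested in the question.
    warped = Image.new('L', (w, h), 255)
    for x in range(w):
        dy = int(8 * math.sin(x / 18.0))
        warped.paste(img.crop((x, 0, x + 1, h)), (x, dy))
    warped.save('captcha.png')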
Make each letter/number out of a pattern, i.e. unconnected dots, meaning the computer has no way of knowing that a dot is part of a letter other than by pattern recognition (which computers are still poor at). Then add the usual distortions and random lines.
How you do this is the challenge.
EDIT: Also, bonus points for patterns of different shapes, and try alpha transparency on the characters (on the edges or the whole character), so they merge with the background.
Make letters difficult to separate. Use handwriting-like font or add lines that join letters. Decrease and randomize spacing between letters.
Add wave distortion in other axis too. Distortion in one axis only can be relatively easily analyzed and reversed.
Don't bother with a colored background at all. It's super-easy to automatically filter black from other colors. Your background hinders only humans.
Don't add scratches or other noise unless it has the same thickness as letters. Noise-removal algorithms can easily remove things that are thinner than letters.
What if the color of the letters faded into other colors... for instance, the 5 could start off yellow on top and fade into blue or something. The colors chosen should be random.
Combined with a multicolored background, that might make it hard for the computer to pick up where the background ends and the character begins, and hopefully it would not be too difficult for the human to pick out the pattern.
Instead of generating captchas, you could create a captcha table in your database and populate it yourself by searching Google for good captcha images.
Then there's no need to worry about "Will this generation method work?"
I really hate CAPTCHA on sites, they just annoy me, but if you want to try and make a robust one try the following:
Ability to get a new image without submitting
Spoken version for the visually impaired
Non-uniform characters
I've used reCAPTCHA on a few sites; it's a nice and robust solution.
Or if you want to be really funky about it, check out this: http://research.microsoft.com/asirra/
Algorithms that try to break captchas are pattern matchers that work in a few different ways: scaling and skewing the symbols they already know about, finding and tracing edges, and counting interior holes. If you can break the letters up into pieces, vary the letter quality, or add strong lines or "scratches" along the letters, those measures will help against such matchers. However, all of this is fairly moot considering we have reCAPTCHA for this purpose, and it's a wonderful third-party app. Additionally, a captcha will help the security of your site, but it will not stop those who are truly motivated.
I like the idea of KittenAuth and Microsoft's Asirra project. The idea is that, while OCR will eventually evolve to break your traditional captcha, the ability to distinguish a kitten from a dog is many orders of magnitude more complex a problem, while absolutely trivial for humans.
This solution, while probably the sexiest captcha idea ever, has the limitation of not being easily portable to an audio alternative for visually impaired users.
What about shearing and shuffling bands to mangle the display, combined with mouse-only input?
Start by taking your sine-wave-morphed text, then divide it into horizontal bands, or maybe even a grid.
That makes optical recognition harder and might allow you to avoid the kind of nasty background games that make some captchas hard for humans.
For a site where you can rely on local drag in the browser, instead of a typed entry, use shuffling that requires the user to re-order the pieces (just into rough order, not like one of those puzzles). Or, if you want to use clicks alone, the classic sliding-tile puzzle.
Note: I've run into a captcha where you had to identify which of N cartoons had an animal in them, and it succeeded in blocking me!
Wellington Grey sums up the AI CAPTCHA race nicely.
You could add a random array of fonts so that GD renders each character using a different one.
Be wary of suggestions of reCAPTCHA. I have submitted incorrect input to it a few dozen times and have had success each time. Several of those times I submitted incorrect input for both words, rather than just the more obscured word; the success rate, as I said, has been 100%.
I also think that image-based CAPTCHAs are user-hostile and should be avoided wherever possible. The advantage of text-based solutions is that you can tailor them to your site's audience, adding a level of obscurity that may trip up machines as they become more savvy with text-based solutions.
At the very least, don't use this all the time:
(source: codinghorror.com)