Meaning of "Leave the graphics context in any state" - core-graphics

I use "Core Text" function CTLineDraw. But looks like it has some side effect on CGContextRef: filling rectangles does not work any more after CTLineDraw for the same context (but works before or if I commented out CTLineDraw call).
According to the docs for CTLineDraw and some other "Core Text" functions:
This call can leave the graphics context in any state and does not flush the context after the draw operation.
I think this may be related, but what exactly does this sentence mean? Should I save & restore the context state (this helps)?

Yes, you should save the state before your first call to CTLineDraw and restore it after the last one. What that line in the documentation means is that Core Text sets various bits of the state internally to do the text drawing you're asking it to do, and it doesn't automatically set them back afterwards.
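
For illustration, here is a minimal Swift sketch of that save/restore pattern (the drawing function and the rectangle fill are just examples, not code from the question):

import UIKit
import CoreText

func drawLine(_ line: CTLine, at point: CGPoint, in context: CGContext) {
    context.saveGState()          // snapshot the graphics state Core Text is about to change
    context.textPosition = point
    CTLineDraw(line, context)     // may "leave the graphics context in any state"
    context.restoreGState()       // put the saved state back

    // Rectangle fills after the restore behave the same as before the text drawing.
    context.setFillColor(UIColor.red.cgColor)
    context.fill(CGRect(x: 0, y: 10, width: 100, height: 2))
}

One caveat: the text matrix is not part of the saved graphics state, so if later text drawing also misbehaves you may additionally want to reset it with context.textMatrix = .identity.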

write into popup

I have to display a popup for a legend like the one in the STMS transaction.
I know how to write this tab with the WRITE statement, but how can I display it in a popup?
You can achieve this by using CALL SCREEN ... STARTING AT ..., then using SUPPRESS DIALOG in the PBO processing to bypass the screen (dynpro) processor. Then, in the PAI processing, use LEAVE TO LIST-PROCESSING followed by the WRITE statements. As a reference, you can follow the function module TMS_UI_POPUP_LEGENDE, which shows the popup you mentioned. The procedure is documented in the online help as well.
In an ABAP dialog application, you're either working with screens or with (interactive) lists. To get a popup window, you have to create and CALL a custom screen (dynpro). Inside that screen, you hand over control to the list processor - the component responsible for taking whatever you WRITE and placing it somewhere on the screen. For some - probably mostly historical - reason, the command to do so is LEAVE TO LIST-PROCESSING. I suppose that at some point the intended flow between screens and lists was different from what it has become today, and that was the reason for naming the command this way. From a modern point of view, and especially in your use case, the LEAVE aspect does not make much sense, so just take it as it is and use it.
Also note that it's LEAVE TO LIST-PROCESSING - LEAVE LIST-PROCESSING without the TO is the opposite statement!

display depth of stimulus (front, back, behind in front)

According to http://www.psychopy.org/api/visual/textstim.html, 'depth' is now deprecated and 'Depth is now controlled simply by drawing order.'
I'm using Builder 1.80.06 and have most stimuli defined in Routine dialogs, but I need to draw some at runtime using code, and I want them to go behind the other stimuli. I can't work out how to do this.
Is there any way this can be done now?
The Code Component code really is inserted in order, as is the code from standard components. The order in which code from different Routines runs during creation is unspecified (you don't have control over which Routine's "Begin Experiment" code is executed first), but this doesn't affect your drawing depth anyway.
The key is that within your Routine, the code in the "Every Frame" section, with the draw() command, has to be in the right order (before your standard components).
UPDATE: given the new details, I believe that Jon's answer is the correct one.
OLD ANSWER: Since 1.72.00, the drawing order in Builder is controlled by the components' order in the Routine. The topmost component is drawn first, then the second one on top of it, and so on. The bottom component always ends up on top.
The order of components in a routine can be changed by right-clicking on a component to bring up a contextual menu with items like "move up", "move down", "move to top", etc.
As a side note: in code, the drawing order is simply the order of the lines of code:
background.draw()
stim.draw() # on top of background
fixationCross.draw() # on top of the others.
win.flip() # show it
You can verify that Builder does exactly this by looking at the python code it generates.

What is the most efficient way to quickly understand how a complex LabView VI works?

What is the best way to understand a complex LabView VI that controls a motor?
My goal is to control the motor from a joystick.
The wiring diagram shown below allows a LabView user to control the motor from the LabView GUI: move a slider up and down to increase or decrease the desired velocity. As the slider's value changes, it is fed into a bunch of math controls and eventually gets converted into a command string for the motor to interpret. This command string, if I understand correctly, is a bunch of bytes that get written to the serial port.
Instead of using the LabView GUI to control the motor, I would like to use the joystick.
What is the best way to approach this?
The joystick has pitch, yaw, roll, and throttle. Which one relates best to the velocity of a motor?
The answer to your title "What is the most efficient way to quickly understand how a complex LabView VI works?" is probably to do some combination of the following:
Look at the VI's inputs and outputs to try and understand what they are there for. The labels and captions of controls and indicators may be helpful; also right-click to check the description and tip.
As well as controls and indicators, look for other I/O: queues, notifiers, global variables, file read/writes, instrument communications, and for any data storage that persists between calls such as an uninitialised shift register.
Look at the overall structure of the VI to see how it executes, e.g. is it a one-off operation, does it execute different cases depending on some input, does it loop until a certain condition happens, does it use a state machine structure, etc
Break down the VI's structure into smaller pieces that you can understand. You could print the diagram out and annotate it by hand, or add frame decorations and text comments to the diagram to record what you deduce. If the diagram is cluttered or poorly laid out, rearrange it as you go along (use Ctrl-click and drag on the diagram background to add blank space where you need it).
Set probes on key wires and watch them while the VI runs to see what happens
If possible, manually set the VI's controls to example values and run it to see what happens (this may not work if the VI depends on other parts of a program running at the same time)
Write a test wrapper VI that calls the complex VI and supplies it with example data or inputs to see what happens.
To address your specific question about the VI diagram you've posted, I can see various controls for quantities such as Velocity, Position, Amplitude, Max A (amplitude?), Frequency and so on. You need to decide which of these quantities should be controlled by which axis or output of your joystick. Then you need to add code that reads those values from your joystick, and modify the existing code so that the parameters you want to control are supplied by the joystick values instead of the front panel controls. You could probably just put the joystick reading code inside the existing loop, wire the joystick outputs to join up with the wires from the front panel controls you want to replace, and then change the relevant front panel controls to indicators from the right-click menu so that they will show the values you are getting from the joystick.
The best way is to write one from scratch. But you could analyse the code by clicking the Highlight Execution button to display an animation of the block diagram execution when you run the VI, and use probes to check intermediate values. And you probably should also do an on-line course, e.g. LabVIEW Training: Learn LabVIEW in Three or Six Hours
My answer to your third question is "throttle".

UITextInput - Is it OK to return Incorrect 'beginningOfDocument' & 'endOfDocument'?

I'm creating my own text editor in iOS using Core Text. Pretty much everything works great, with one exception: things really start to slow down when the text document is "large". I've discovered that iOS is requesting the entire document text on every change, including selection changes (at least, when I notify the UITextInputDelegate of selection changes). Part of the problem is that I've already optimized my Core Text code by splitting up the document into paragraphs and rendering only the paragraphs that change. But doing this also split up the document string (which is an NSAttributedString) into separate 'paragraph objects'. So when iOS requests the entire text document, I have to combine all those strings into one string, which takes time and memory.
My solution is to give iOS incorrect UITextPositions for the beginningOfDocument and endOfDocument methods, limiting those positions to the paragraph(s) intersecting the current selection. This is actually working very well. iOS now only requests the current paragraph(s) of the change, which has completely eliminated the slow-down.
So far, so good, but I'm a little worried that this might break something. I've tested this a bit and nothing is broken, but text editors can be hard to test (who knows if it'll break in some edge case).
I have 2 questions:
Should iOS be requesting the entire document text on each change? If not, then perhaps some other of my UITextInput protocol methods is returning a wrong value, somehow causing iOS to request the entire document.
Does anyone know if this will actually break anything?
Alright, I've been testing this for quite a while now and I've finally found a place where using this technique breaks functionality. UITextInput uses beginningOfDocument and endOfDocument to determine whether it has room to "move" when you press the arrow keys on a Bluetooth keyboard. Returning only the beginning and end of the currently selected paragraph(s) causes it to ignore the arrow keys when the selection is at the beginning or end of that paragraph and the arrow indicates an attempt to move outside what it thinks is the beginning/end of the document. It's easy enough to fix: if the current selection starts/ends at the beginning/end of a paragraph, I now also return the previous/next paragraph as part of the document, respectively.
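
As a rough Swift sketch of that idea (the type and property names here - ParagraphTextPosition, EditorView, reportedRange - are illustrative, not the asker's actual code):

import UIKit

// Hypothetical position type wrapping a character index, as Core Text editors commonly use.
final class ParagraphTextPosition: UITextPosition {
    let index: Int
    init(index: Int) {
        self.index = index
        super.init()
    }
}

final class EditorView: UIView /* , UITextInput */ {
    // Range of the paragraph(s) intersecting the current selection, widened by one
    // paragraph whenever the selection touches a paragraph boundary (the arrow-key fix).
    var reportedRange = NSRange(location: 0, length: 0)

    // Report only the boundaries of reportedRange instead of the whole document,
    // so UIKit never asks for the full text.
    var beginningOfDocument: UITextPosition {
        ParagraphTextPosition(index: reportedRange.location)
    }

    var endOfDocument: UITextPosition {
        ParagraphTextPosition(index: NSMaxRange(reportedRange))
    }
}

The rest of the UITextInput conformance (textRange(from:to:), offset(from:to:), text(in:) and so on) would of course have to interpret these clipped positions consistently.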

ITfTextInputProcessor::Deactivate gets called unexpectedly on regaining focus

I am implementing a text service on Windows. Things work fine. However, when I shift the window focus to another application and then shift focus back to the original application, the selected text service gets deactivated (I notice a call to ITfTextInputProcessor::Deactivate). I think this call is unexpected. After this call, the service has to be re-activated manually. I am surely doing something goofy; I just don't know what it is.
Offhand, I'd say that you are indeed doing something goofy. :) In particular, I'd pay careful attention to your ITfThreadMgrEventSink::OnSetFocus implementation (and, obviously, you need to implement ITfThreadMgrEventSink in your text service and connect it via AdviseSink if you haven't already.)
After more research, I've figured out what’s happening:
When you set focus back to Word, TSF gets the current thread’s active keyboard layout (actually a locale ID).
It then compares that keyboard layout with the language ID of the currently active text service.
If they’re different, TSF then activates the text service for the active keyboard layout, and deactivates any previously active text service.
I believe this behavior is different on Vista/Windows 7.
The fix would be to use LoadKeyboardLayout/ActivateKeyboardLayout to set the process keyboard layout in your ITfTextInputProcessor::Activate implementation. Apparently some apps also need you to call ITfInputProcessorProfiles::ChangeCurrentLanguage().