----------------- ORIGINAL QUESTION ------------------------
In Vulkan, in order to begin issuing commands to secondary command buffers, is it mandatory to have already acquired the image and called vkCmdBeginRenderPass() on the primary command buffer?
I'm a beginner, but that's what it seems like to me.
------------------------ EDIT #2 --------------------------------
Yes, it is possible to do this:
1. Possibly asynchronously, process logic and record draw calls into secondary command buffers.
2. Check whether the secondary command buffers have been recorded: if not, go to step 1, else continue.
3. Acquire the image.
4. Begin the primary command buffer; begin the render pass.
5. Execute the previously recorded secondary command buffers.
6. Submit.
7. Present.
It depends on what you mean by "issuing" commands to secondary command buffers, and on which commands you want to record in them.
Not all commands can be recorded in secondary command buffers. But there are commands that can be recorded there and that have nothing to do with rendering (and thus with render passes): data copies and timestamping (timer queries) are examples. They are not connected in any way with render passes, so they don't require you to start one.
But if you want to record drawing commands, then, as you probably know, drawing can only be done from within a render pass, so a render pass must already have been started (in the primary command buffer that calls this secondary command buffer).
As for the vkAcquireNextImageKHR() function: it is independent. If by "issuing" you mean recording, then you don't need to call it. You can record any (valid) commands you want. Recording is just preparing commands for later use, for submission. The same applies to the title of your question:
Possible to generate secondary command buffers before renderpass?
This is (hopefully) just unfortunate wording, but you can record any command buffer at any time you want. It's the submission that counts, and the order of commands recorded in the submitted command buffers. So what would it mean to generate a command buffer before a render pass? If you want to record drawing commands and start a render pass, you need a render pass object. If you want to call a secondary command buffer from within a primary command buffer, and that secondary command buffer draws something, then you need to record a render-pass-starting command first. After that you can call the secondary command buffer. But that secondary command buffer must already be recorded:
Each element of pCommandBuffers must be in the pending or executable state.
So you need to record the secondary command buffer first; then you can record a primary command buffer which calls it.
But if you want to submit a command buffer which uses a swapchain image, then that image must already be acquired. As I (and others) described in your other question (trouble understanding cycling of framebuffers), you cannot submit a command which uses a swapchain image if that image has not yet been acquired. But submission and image acquisition have nothing to do with command buffer recording. You can record command buffers earlier. You can even pre-record different command buffers for different swapchain images. Again, recording is just preparing commands for later use; the actual usage occurs at submission. So you can only submit those command buffers that use swapchain images which have already been acquired.
I hope this helps ;-).
Yes, it is possible to do what I was trying to do, which is to record draw commands into secondary command buffers before starting to record the primary command buffer.
The problem was two-fold.
I wasn't setting VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT in the 'flags' member of VkCommandBufferBeginInfo for the secondary buffer.
The error from the validation layer was misleading: "No active render pass found at draw-time in Pipeline (0x12)!". This led me to attempt wrong solutions.
In order to begin recording a secondary command buffer, you must call vkBeginCommandBuffer with a VkCommandBufferBeginInfo whose pInheritanceInfo points to a VkCommandBufferInheritanceInfo object. If you want to execute the secondary CB inside of a render pass, you must provide:
The VkRenderPass object for the pass it will be executed within. Note that this object is not the product of vkCmdBeginRenderPass.
The index of the subpass of the aforementioned VkRenderPass that this secondary CB will be executed within.
There is an optional VkFramebuffer, which specifies the images you will be rendering to. The specification says that providing this data may help performance, but it is optional nevertheless.
So no, there is nothing about secondary CBs that requires that there is an active render pass instance on the primary CB that it will be executed within.
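Putting the pieces above together, here is a sketch in C of beginning a secondary command buffer outside any render pass instance. It assumes handles such as renderPass and secondaryCmdBuf already exist; it is a fragment, not a runnable program, since it needs a live Vulkan device behind it.

```c
/* Sketch only: renderPass, subpass index and (optionally) framebuffer
 * must be known at record time; secondaryCmdBuf is an already
 * allocated VkCommandBuffer. */
VkCommandBufferInheritanceInfo inheritance = {0};
inheritance.sType       = VK_STRUCTURE_TYPE_COMMAND_BUFFER_INHERITANCE_INFO;
inheritance.renderPass  = renderPass;      /* the VkRenderPass object, not an active instance */
inheritance.subpass     = 0;               /* subpass it will be executed within */
inheritance.framebuffer = VK_NULL_HANDLE;  /* optional; may help performance if known */

VkCommandBufferBeginInfo beginInfo = {0};
beginInfo.sType            = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
beginInfo.flags            = VK_COMMAND_BUFFER_USAGE_RENDER_PASS_CONTINUE_BIT;
beginInfo.pInheritanceInfo = &inheritance;

vkBeginCommandBuffer(secondaryCmdBuf, &beginInfo);
/* ... record draw calls here, before any primary CB exists ... */
vkEndCommandBuffer(secondaryCmdBuf);
```

Later, after the primary command buffer has called vkCmdBeginRenderPass with the matching render pass, the pre-recorded buffer is invoked with vkCmdExecuteCommands.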
------------------------ RELATED ------------------------
Consider for example this modified Simple TCP sample program:
How can I display the current state of the program like
Wait for Connection
Connected
Connection terminated
on the front panel, depending on where the "data flow" currently is.
The easiest way to do this is to place a string indicator on your front panel and write messages to a local variable of this indicator at each point where you want to see a status update.
You need to keep in mind how LabVIEW dataflow works: code will execute as soon as the data it depends on becomes available. Sometimes you can use existing structures to enforce this - for example, if you put a string constant inside your loop and wire it to a local variable terminal outside the loop, the write will only happen after the loop exits. Sometimes you may need to enforce that dataflow artificially, for example by placing your operation inside a sequence frame and connecting a wire to the border of the sequence: then what's inside the sequence will only happen after data arrives on that wire. (This is about the only thing you should use a sequence for!)
This method is not guaranteed to be deterministic, but it's usually good enough for giving a simple status indication to the user.
A better version of the above would be to send the status messages on a queue or notifier which you read, and update the status indicator, in a separate loop. The queue and notifier write functions have error terminals which can help you to enforce sequence. A notifier is like the local variable in that you will only see the most recent update; a queue keeps all the data you write to it in the right order so would be more suitable if you want to log all the updates to a scrolling list or log file. With this solution you could add more features: for example the read loop could add a timestamp in front of each message so you could see how recent it was.
A really good solution to this general problem is to use a design pattern based on a state machine. Now your program flow is clearly organised into different states and it's very easy to add in functionality like sending a different message from each state. There are good examples and project templates for these design patterns included with recent versions of LabVIEW.
You should be able to find more information on any of the terms in bold in the LabVIEW help or on the NI website.
Problem:
I'm running a script that includes an infinite loop, and I would like to exit this loop with user input. I don't want to use the standard "input" function because it pauses the execution of the loop while waiting for user input. I want the program to keep looping all the time (until a certain keyboard input is given). I don't want to exit the loop with Ctrl+C either, because then the shutdown procedures located after the loop are not executed.
Question:
In Octave, when something is typed into the command window during execution of a script, nothing is shown in the command window until the script has ended. From this it seems clear that keyboard inputs given during the execution of a script are stored somewhere (is this right?). And now the big question is: where? And how can I access this data?
I'm running Octave 4.0.0 in Win7
P.S. Other suggestions for stopping the loop are also welcome.
Use kbhit:
while (1)
  if (kbhit (1) == 'x')
    break
  endif
  sleep (0.2)
  printf ("Loop is running...\n");
  fflush (stdout);
endwhile
Or, if you want to exit with Ctrl-C and still finalize your script, use an unwind_protect / unwind_protect_cleanup block:
unwind_protect
  while (1)
    sleep (0.2)
    printf ("Loop is running...\n");
    fflush (stdout);
  endwhile
unwind_protect_cleanup
  disp ("doing my cleanup");
end_unwind_protect
In Octave, when something is typed into the command window during execution of a script, nothing is shown in the command window until the script has ended. From this it seems clear that keyboard inputs given during the execution of a script are stored somewhere (is this right?). And now the big question is: where? And how can I access this data?
This is called buffering and is a very common behaviour. The principle is simple: instead of writing everything as soon as it's ready, your system keeps it in a buffer and only writes it when told to do so, or when the buffer is full. Many disk operations work like this; for example, when you copy a few small files onto a USB stick (depending on the mount options), the writes are buffered. Once the buffer is full, or when you click to eject the USB stick, the system actually performs the write.
If you read the section Paging Screen Output of the Octave manual you will see:
Normally, no output is displayed by the pager until just before Octave is ready to print the top level prompt, or read from the standard input (for example, by using the fscanf or scanf functions). This means that there may be some delay before any output appears on your screen if you have asked Octave to perform a significant amount of work with a single command statement. The function fflush may be used to force output to be sent to the pager (or any other stream) immediately.
Alternatively, you can turn off this buffering with page_output_immediately():
Query or set the internal variable that controls whether Octave sends output to the pager as soon as it is available.
Otherwise, Octave buffers its output and waits until just before the prompt is printed to flush it to the pager.
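The fflush (stdout) calls in the Octave snippets above come straight from C's stdio, where the same buffering rule applies. A minimal sketch in C (the helper name is made up for illustration):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Writes a status message and flushes the stream, so the text appears
 * immediately instead of sitting in the stdio buffer. */
void emit_status(FILE *out, const char *msg)
{
    fprintf(out, "%s", msg); /* may stay in the buffer... */
    fflush(out);             /* ...until explicitly flushed */
}
```

Calling emit_status(stdout, "Loop is running...") behaves like the printf/fflush pair inside the loops above.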
I'm writing a program to control two similar devices in LabVIEW. In order to avoid copying the code I use subVIs. But I have a piece of code where I update some values on the GUI inside a while loop. I'd like to know if it is possible to have this loop inside my subVI and have the subVI send one of its output parameters after each iteration.
To update your GUI from within a subVI you can do one of the following:
Create a queue or notifier in your top level VI and pass the reference in to your subVI. In the subVI, send the data to the queue or notifier. In the top level VI, have a loop that waits for data on the queue or notifier and writes that to the front panel indicator.
Create a control reference to the front panel indicator in the top level VI and pass the reference to your subVI. In the subVI, use a property node to write the Value property of the indicator.
If you look at the LabVIEW help for the terms in bold you'll find documentation and examples for how to use them.
Of these options, I would use a queue for any data where it's important that the top level VI receives every data point (e.g. if the data is being plotted on a chart or logged to a file) or a notifier where it's only necessary that the user sees the latest value. Using control references for this purpose is a bit 'quick and dirty' and can cause performance issues.
If you need to update more than a couple of indicators like this, you'll probably want to build a cluster containing the data you send to the queue/notifier, or containing the control references. Save your cluster as a typedef so that you can modify its contents without breaking your code.
Another option is a channel wire. A channel wire sends data from a producer loop to a consumer loop without the overhead of a reference and property node, and without having to create and close a queue or notifier reference. If you make a simple VI with writer and reader loops as shown in the LabVIEW Help, then select the writer loop and go to Edit -> Create SubVI, you'll have a template to use.
I'm planning on developing a Monopoly game using a Console application in VB.NET, but with a separate GUI (probably a Forms application) that displays the state of the Monopoly board based on the information in the Console application, so that it can be ignored or used as the players wish. I've been looking into ways of sending information between two programs and came across Pipes, but they seem complex and I'd like to use a different method if I can avoid it. The following is the methodology I'm currently considering for sending information. I'd like to know if there is any way I could improve it, or whether you think it's completely stupid and I should just use Pipes instead.
Program 1 is the Console application which controls everything: the state of the game depends on the Console. Program 2 is the GUI/Forms application which follows instructions sent by Program 1 and displays the board accordingly. Program 1 and Program 2 communicate using two text files, Command.txt and CommandAvailable.txt. When something changes on Program 1 - e.g. a player makes a move - a command string is made and added to a queue. Program 1 continually checks CommandAvailable.txt to ensure that the file is empty, and if so, it clears Command.txt and then appends every command string in the queue to Command.txt. When it has finished, arbitrary text is added to CommandAvailable.txt, e.g. "CommandAvailable".
Program 2 continually checks CommandAvailable.txt until it is not empty, meaning that Program 1 has added at least one command to Command.txt. Program 2 then reads every instruction in Command.txt and adds it to a queue on its side. CommandAvailable.txt is then cleared, which permits Program 1 to add more commands to Command.txt (because it only adds commands when CommandAvailable.txt is empty and has not already been marked by itself). A separate thread on Program 2 empties the queue of command strings, parses them and executes them.
For example, in the Console, Player 1 may move to Trafalgar Square (or whatever the square would be called.) Program 1/Console would add the Command "move player1 trafalgar_square" to the queue, then check CommandAvailable.txt, and if it is empty, add all the commands in the queue to Command.txt. Program 2/The GUI would check CommandAvailable.txt and as it had been marked by Program 1, read the command, add it to the queue, and then move a picturebox that represents Player 1 to a square.
Please let me know if you think this methodology could be improved, or if you think it's simply stupid and there are far better alternatives or that I should just use Pipes instead. I'm going to be using VB.NET.
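For concreteness, the two-file handshake described above can be sketched as follows. This is in C purely for illustration (the real program would be VB.NET); the file names come from the question, while the helper function names and buffer sizes are made up.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Returns 1 if the file exists and is non-empty (the "marked" state). */
static int flag_is_set(const char *path)
{
    FILE *f = fopen(path, "r");
    if (!f) return 0;
    int c = fgetc(f);
    fclose(f);
    return c != EOF;
}

/* Program 1: if the flag file is empty, clear Command.txt, write the
 * queued commands, then raise the flag. Returns 1 on success. */
static int publish_commands(const char *cmd_path, const char *flag_path,
                            const char *commands[], int n)
{
    if (flag_is_set(flag_path))
        return 0;                    /* reader hasn't consumed the last batch */
    FILE *f = fopen(cmd_path, "w");  /* "w" clears Command.txt */
    if (!f) return 0;
    for (int i = 0; i < n; i++)
        fprintf(f, "%s\n", commands[i]);
    fclose(f);
    f = fopen(flag_path, "w");       /* raise the CommandAvailable marker */
    if (!f) return 0;
    fputs("CommandAvailable", f);
    fclose(f);
    return 1;
}

/* Program 2: if the flag is set, read all commands into lines[] and
 * clear the flag. Returns the number of commands read. */
static int consume_commands(const char *cmd_path, const char *flag_path,
                            char lines[][64], int max)
{
    if (!flag_is_set(flag_path))
        return 0;
    FILE *f = fopen(cmd_path, "r");
    if (!f) return 0;
    int n = 0;
    while (n < max && fgets(lines[n], 64, f)) {
        lines[n][strcspn(lines[n], "\n")] = '\0';
        n++;
    }
    fclose(f);
    f = fopen(flag_path, "w");       /* truncate = clear the flag */
    if (f) fclose(f);
    return n;
}
```

This makes the race window in the design visible: the flag file is the only synchronisation, which is exactly the kind of coordination a named pipe would give you for free.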
Okay, I'll try to explain as well as I can... It's quite a particular case.
Tools: SSIS 2008
We have a control flow that now needs to be triggered by an event: the presence of one or multiple files. (1,2 or 3)
The variables used:
BO_FileLocation_1
BO_FileLocation_2
BO_FileLocation_3
BO_FileName_1
BO_FileName_2
BO_FileName_3
There can be one, two or three files, defined in the variables above. When they are filled in, the files should be processed. When they are empty (meaning there is just one file), the process should ignore them and jump to the next (file watcher?) task.
For example:
BO_FileLocation_1 = "C:\"
BO_FileLocation_2 = NULL
BO_FileLocation_3 = NULL
BO_FileName_1 = "test.csv"
BO_FileName_2 = NULL
BO_FileName_3 = NULL
The report only needs one file.
I'd need a generic concept that checks for the presence of these files; it may need to be more generic than my SSIS knowledge can handle right now. That would be handy, for example, if a 4th file is added in the future. I was also thinking of using a single script to handle all the logic.
Thanks in advance
A possibly irrelevant image:
If all you want is to trigger the Copy Source File task when one or more of the files is present, just use the OR constraint in your flow. The following image shows you how:
First connect all to the destination:
Then click one of the green arrows. This will make its properties window pop up. Select Logical OR instead of Logical AND:
If everything went well, you should now see the connections as dashed lines:
There are several possible solutions:
Create a sequence container and include all the file imports in it. Add int variables RowCountFile1, RowCountFile2, and RowCountFile3 and set their values to 0 (the default when you create an int variable). Add a Row Count transformation to each of the data flows. Create a precedence constraint from the sequence container to the "Do something" task. Set the precedence constraint to success and expression, with the expression @RowCountFile1 > 0 || @RowCountFile2 > 0 || @RowCountFile3 > 0. The advantage of this approach is that you can take an action as soon as the files are detected, you import all available files, and you only take the action after all the files have been imported. You could then schedule this SSIS package as a SQL Server Agent job step and run it as frequently as you want.
A variant on solution 1 is to use Foreach Loop containers with a file enumerator inside the sequence container. This would be useful if you don't know the exact name of the file and you expect to import more than one under some circumstances. For instance, if you get a file every few minutes with a timestamp in its file name and your process doesn't run for some reason, then you may have to process multiple files to get caught up and then take an action once that has been done.
You could use the file watcher task as you outlined in your question. The only problem I have with the file watcher task is that the package has to be in a constantly running state, which makes it hard to troubleshoot problems and performance. It can also introduce other problems; I remember having some issues with the file watcher task years ago when it first came out. It may well be a totally stable task now, but I prefer other methods after having been burned previously. If you really want the package to run continuously instead of having it be called by a job, then you could always use a script task to check for the file, sleep the thread if it's not found, check again, and so on. I'm sure that's what the file watcher task does, but I would trust my own C# over the task. Power to anyone who has had better experiences than me with File Watcher...
Use PowerShell. If you just want to take an action when a file appears and you aren't importing the data, then a PowerShell script could do this just as well as an SSIS package. The drawbacks are that you have to learn some basic PowerShell, it may be hard to maintain in the future since PowerShell is probably not your bread-and-butter language, and you may have to rewrite the code as an SSIS package if you later want to import the data. You would probably call the PowerShell script from a SQL Server Agent job step, so scheduling can be handled pretty easily.
There are more options than what I listed, so let me know if you still want more suggestions.
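The "check for the file, sleep, check again" loop from the script-task option can be sketched like this. It is in C purely for illustration (an SSIS script task would use C#); the function name and retry budget are made up, and the sleep call is left as a comment to keep the sketch portable.

```c
#include <assert.h>
#include <stdio.h>

/* Polls for a file by trying to open it, up to max_attempts times.
 * Returns 1 if the file appeared, 0 if we gave up. A real loop would
 * sleep between attempts (e.g. nanosleep() on POSIX, Sleep() on Windows). */
int wait_for_file(const char *path, int max_attempts)
{
    for (int attempt = 0; attempt < max_attempts; attempt++) {
        FILE *f = fopen(path, "r");
        if (f) {
            fclose(f);
            return 1;   /* the file appeared */
        }
        /* sleep here before the next check */
    }
    return 0;           /* gave up after max_attempts checks */
}
```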