I am in VB.NET and have a function that will be accessed by multiple threads. Everything inside the function uses local variables. However, each thread will pass in its own dataset by reference.
From what I have read, the local variables should be no problem, but I think the dataset coming in is a concern.
How should I control access / execution of this function to make sure that it is thread safe? Thanks.
Assuming that by 'dataset' you mean a System.Data.DataSet, if your function is only reading from the dataset, then you don't need any synchronization, since "This type is safe for multithreaded read operations" (from http://msdn.microsoft.com/en-us/library/system.data.dataset.aspx).
If you're modifying the dataset, then as long as each dataset is a different instance, there should be no problem.
If you're modifying the data and different threads might pass in a reference to the same dataset, you'll need to synchronize access to the dataset using a Monitor (SyncLock in VB.NET, lock in C#) or some other synchronization technique.
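For instance, here is a minimal sketch of that last case (the module, method, table, and column names are mine, not from the question): a write to a possibly shared DataSet is guarded by SyncLock, while purely local work stays unsynchronized.

Imports System.Data

Public Module DataSetWorker
    ' Hypothetical example: ProcessData and the "Summary"/"OrderCount" names are
    ' illustrative and not from the original question.
    Public Sub ProcessData(ByVal ds As DataSet)
        ' Local variables are per-thread and never need synchronization.
        Dim newTotal As Integer = 42

        ' The lock is only needed when threads can share the same DataSet instance
        ' and at least one of them writes to it; SyncLock uses a Monitor under the hood.
        SyncLock ds
            Dim row As DataRow = ds.Tables("Summary").NewRow()
            row("OrderCount") = newTotal
            ds.Tables("Summary").Rows.Add(row)
        End SyncLock
    End Sub
End Module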
So a VkSampler is created with a VkSamplerCreateInfo that just has a bunch of configuration settings which, as far as I can see, would just define a pure function of some input image.
They are described as:
VkSampler objects represent the state of an image sampler which is used by the implementation to
read image data and apply filtering and other transformations for the shader.
One use (possibly the only use) of VkSampler is to write them to descriptors (such as VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER) for use in descriptor sets that are bound to pipelines/shaders.
My question is: can you write the same VkSampler to multiple different descriptors? From the same or from multiple different descriptor pools? Even if one of the current descriptors is in use in some currently executing render pass?
Can you use the same VkSampler concurrently from multiple different render passes / subpasses / pipelines?
Put another way, are VkSamplers stateless? Or do they represent some stateful memory on the device, so that you shouldn't use the same one concurrently?
VkSampler objects definitely have data associated with them, so it would be wrong to call them "stateless". What they are is immutable. Like VkRenderPass, VkPipeline, and similar objects, once they are created, their contents cannot be changed.
Synchronization between accesses is (generally) needed only when one of the accesses is a modification operation. Since VkSamplers are immutable, there are no modification operations, so synchronization is not needed when you're accessing a VkSampler from different threads, commands, or what have you.
The only exception is the obvious one: vkDestroySampler, which requires that submitted commands that use the sampler have completed before calling the function.
I'm new to Object Oriented Programming. I'm working on an application which takes 2 URLs, fetches their source code, parses it, and shows results based on some metrics. I'm planning to create a class, make all the metrics its instance variables, then create 2 instances of this class (1 for each URL), pass the URL in the constructor at object initialization, and then initialize all the instance variables based on some computation inside the constructor itself. The values of some of the instance variables may depend on the values of other instance variables. Is it a good programming practice to do it the way I'm planning to?
This should be fine, as long as you initialize them in the correct order in your constructor. It's not good practice to calculate instance variable values at declaration, but doing so in the constructor should be no problem.
I would at least separate the construction of the object and the initialization, the latter being fetching the source code and calculating the metrics. Fetching is slow and may fail due to external circumstances. Calculation may fail due to unexpected format/content. You would want the setup to be quick and sure to succeed. You could then do the risky stuff by calling a method and if anything fails you will at least have access to the history/state through the already created object. It would make your object more versatile.
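As a rough sketch of that separation (the question doesn't name a language, so this uses VB.NET, and every class and member name here is made up): the constructor only records the URL, while a separate Load method does the slow, failure-prone fetching and computes the metrics in a fixed order.

Imports System
Imports System.Net

' Illustrative only: PageMetrics, Load, WordCount and LinkCount are invented names.
Public Class PageMetrics
    Private ReadOnly _url As String
    Private _source As String

    Public Property WordCount As Integer
    Public Property LinkCount As Integer

    ' Cheap and sure to succeed: just remember the URL.
    Public Sub New(ByVal url As String)
        _url = url
    End Sub

    ' The slow, failure-prone work lives here, not in the constructor.
    Public Sub Load()
        Using client As New WebClient()
            _source = client.DownloadString(_url)   ' may throw on network errors
        End Using
        ' Metrics that depend on other values are computed in a fixed order.
        WordCount = _source.Split(" "c).Length
        LinkCount = CountLinks(_source)
    End Sub

    Private Function CountLinks(ByVal html As String) As Integer
        ' Naive placeholder: count occurrences of "<a ".
        Return html.Split(New String() {"<a "}, StringSplitOptions.None).Length - 1
    End Function
End Class

The caller would create the object with New PageMetrics(url) and then call Load() inside a Try/Catch; if fetching or parsing fails, the object still exists and can report what went wrong.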
Question
Is there a way to programmatically set what FPGA variables I am reading from or writing to so that I can generalize my main simulation loop for every object that I want to run? The simulation loops for each object are identical except for which FPGA variables they read and write. Details follow.
Background
I have a code that uses LabVIEW OOP to define a bunch of things that I want to simulate. Each thing then has an update method that runs inside a Timed Loop on an RT controller, takes a cluster of inputs, and returns a cluster of outputs. Some of these inputs come from an FPGA, and some of the outputs are passed back to the FPGA for some processing before being sent out to hardware.
My problem is that I have a separate simulation VI for every thing in my code, since different values are read from and returned to the FPGA for each thing. This is a pain for maintainability and seems to cry out for a better method. The problem is illustrated below. The important parts are the FPGA input and output nodes (which change for every thing), and the input and output clusters for the update method (which are always the same).
Is there some way to define a generic main simulation VI and then programmatically (maybe with properties stored in my things) tell it which specific inputs and outputs to use from the FPGA?
If so then I think the obvious next step would be to make the main simulation loop a public method for my objects and just call that method for each object that I need to simulate.
Thanks!
The short answer is no. Unfortunately, once you get down to the hardware level with LabVIEW FPGA, things become very static and rely on hard-coded IO access. This is typically handled exactly the way you have presented your current approach. However, you may be able to encapsulate the IO access with a bit of trickery.
Consider this: define the IO nodes on your diagram as interfaces and abstract them away behind a function (or VI or method, whichever term you prefer). You can implement this with either a dynamic VI call or an object-oriented approach.
The data types defined by your interface are well known, because you are pushing and pulling them from clusters that do not change.
By abstracting away the hardware IO with a method call you can then maintain a library of function calls that represent unique hardware access for every "thing" in your system. This will encapsulate changes to the hardware IO access within a piece of code dedicated to that job.
Using dynamic VI calls is ugly, but you can use the properties of your "things" to dictate the path to the exact function you need to call for that thing's IO.
An object-oriented approach might have you create a small class hierarchy, with a root object that represents generic IO access (probably doing nothing) and children overriding a core method for reading or writing. This method would take your FPGA reference in and spit out the variables every hardware call returns (or vice versa for a write). Under the hood it takes care of deciding exactly which IO on the FPGA to access. Example below:
Keep in mind that this is nowhere near functional; I just wanted you to see what the diagram might look like. The approach will help you further generalize your main loop and allow you to embed it within a public call, as you had suggested.
This looks like an object mapping problem, which LabVIEW doesn't have great support for, but it can be done.
My code maps one cluster to another, assuming the control types are the same, using a 2-column array as a "lookup."
I'm including my own branching rule in SCIP and I'm using the SCIPincludeBranchruleMybranchingrule() function to initialize some branching rule data. One of the things I do is call the SCIPgetNVars() function. When I run the code, I see that the function is called many times (not just once before the B&B algorithm starts, as I thought) and I get the following error, triggered by the SCIPgetNVars() call:
[src/scip/scip.c:10048] ERROR: invalid SCIP stage <0>
I'm confused about the use of SCIPincludeBranchruleMybranchingrule(), since the documentation states that this function can be used to initialize branching rule data. I would like to initialize some data that can be used at every B&B node; maybe branching rule data is not the right way of doing it.
I'd appreciate any help!
The important thing to note here is that there is no problem available yet for which you want to access the variables.
Branching rules of SCIP provide several callbacks for data initialization. The include-callback is only called once when SCIP starts, i.e., in the SCIP_STAGE_INIT stage of SCIP.
At this stage, you want the branching rule to inform SCIP that it exists, and optionally introduce some user parameters that are problem-independent.
There are two more callback functions that allow for storing data and are better suited for what you intend to do: SCIPbranchruleInitsolMybranchingrule, which is called just before the (presolved) problem is about to be solved via branch-and-bound, and SCIPbranchruleInitMybranchingrule, which is called after a newly read problem has been transformed.
Since the execution of a branching rule is restricted to the branch-and-bound process, the callback you want is SCIPbranchruleInitsolMybranchingrule; implement it by moving all problem-specific data initialization there. Don't forget to also implement SCIPbranchruleExitsolMybranchingrule to free the stored data every time the branch-and-bound search terminates, whether the search finished, a time limit was hit, or SCIP decided that it wants another restart.
FYI: Data that is allocated during the include-callback can be freed in the SCIPbranchruleFreeMybranchingrule callback, which is executed once when SCIP is about to exit and free all remaining system memory.
I have a question about thread safety using XML in VB.NET.
I have an application that manages an XmlDocument object as the user creates new items/makes changes to existing items. I already know that I need to synchronize calls to XmlDocument.CreateElement(...). My question is, can I then proceed to build the returned element without synchronization, then just synchronize again when appending that element into the XmlDocument?
This is what I think I can do, I just need to make sure it is thread-safe like I think it is:
' "doc" object already exists as an XmlDocument
SyncLock doc
Dim newsub As XmlElement = doc.CreateElement("submission")
End SyncLock
' use "newsub" here without synchronization
SyncLock doc
doc.Item("submissions").AppendChild(newsub)
End SyncLock
When adding the children of "newsub", I would also synchronize only when creating each element.
As a follow-up to this question, would I be better off just synchronizing the entire building of the "newsub" object? The reason I think doing it as above is better is performance, but I am by no means an expert in whether I am actually making a meaningful impact on performance or just complicating things.
In general, when using any class derived from XmlNode, you will need synchronization, as its documentation explicitly states:
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
This means you'll need synchronization when appending children, as you've shown.
As a follow-up to this question, would I be better off just synchronizing the entire building of the "newsub" object? The reason I think doing it as above is better is performance, but I am by no means an expert in whether I am actually making a meaningful impact on performance or just complicating things.
It depends: if you're going to do anything that makes the element reachable from multiple threads while you're still building it, then you may need to synchronize that work as well.
In your above code, it should be safe to work with newsub outside the synchronization, since it's not part of the actual document tree until you append it as a child. This will reduce the time during which doc is locked, which could reduce contention if doc is being used from multiple threads.
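As a sketch of that pattern (AddSubmission and the "title" element are invented names, and this just follows the reasoning above), the locked regions stay as small as possible while the detached subtree is built lock-free:

Imports System.Xml

Public Module SubmissionBuilder
    ' Hypothetical helper: lock doc only around CreateElement and the final
    ' AppendChild; build the detached subtree in between without a lock,
    ' since no other thread can reach it through doc yet.
    Public Sub AddSubmission(ByVal doc As XmlDocument, ByVal title As String)
        Dim newsub As XmlElement
        Dim titleElem As XmlElement

        SyncLock doc
            newsub = doc.CreateElement("submission")
            titleElem = doc.CreateElement("title")
        End SyncLock

        ' Detached nodes: not visible through doc until the AppendChild below.
        titleElem.InnerText = title
        newsub.AppendChild(titleElem)

        SyncLock doc
            doc.Item("submissions").AppendChild(newsub)
        End SyncLock
    End Sub
End Module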