I am trying to design my DOORS/DXL script to be efficient in terms of server usage.
I would like to know if multiple reads of certain values from a module cause multiple server actions.
Here is some pseudocode:
for each object N in aModuleOtherThanTheCurrentlyOpenModule {
    read object N's text
}
Let's say N = 1 million. Will this cause 1 million accesses to the server?
Thanks
If you load the module,
Module m = read(aModuleOtherThanTheCurrentlyOpenModule, false)
Then it should be loading the entire module to the local client. That would mean that accessing any of the information in the module would be client side at that point, until you close it via:
close m
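Putting that together, a minimal DXL sketch (the module path is just a placeholder):

Module m = read("/Project/aModuleOtherThanTheCurrentlyOpenModule", false)
Object o
for o in m do {
    // each read is served from the client-side copy, not the server
    string objText = o."Object Text"
}
close m

With the module held open like this, the reads inside the loop should not translate into one server access per object.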
However, future releases of DOORS (i.e. DOORS Next Generation, which has not yet been released) are unlikely to function this way.
I am working on a PyQt5 app. I want to display a large dataset extracted from a MongoDB database.
To do so, I extract my collection into 3 cursors (I need to sort the display). Until now, I was casting the cursors to lists and then emitting them.
But now my database has grown significantly in size, and runtime has become a major issue. Going through the lists is time-consuming. Therefore, I am trying to find a way to access a cursor in its entirety without looping through all of it (I know it sounds a bit tricky, since the cursor is just a reference to the collection).
An example of what is done now:
D1 = list(self.collection.find({"$and": [{"location": current_location}, {"vehicle": "Car"}]}))
D2 = list(self.collection.find({"$and": [{"location": current_location}, {"vehicle": {"$nin": ["Car", "Truck"]}}]}))
D3 = list(self.collection.find({"$and": [{"location": current_location}, {"vehicle": "Truck"}]}).sort([("_id", -1)]).limit(60 - len(D1)))
extracted = D1 + D2 + D3
l_data_extracted.emit(extracted)  # send the loaded data to the front
Loading the 3 cursors is time-consuming, but then casting them into 1 list makes it even heavier for the app.
I looked for resources about cursor management, but every time I see answers that involve looping or casting (which I already do).
Is there a way to directly emit the cursor, a bit like passing an argument by reference in C to reach the pointed-to data? Or am I bound to loop/cast into a list because of the cursor's particular nature?
I need some help in creating a VI that generates virtual or calculated channels based on several channels I measure.
e.g.
I measure voltage on several AI channels, let's say channels A, B, C, D, E, where B, C, and E represent the current on a shunt, and I would like to calculate the power of the system:
Q[A] = B+C
R[W] = A*Q
S[W] = D*E
T[W] = R+S
I would like to load the equations externally from a configuration file that may vary from one project to another. The equations would come in the form of strings: Q=A+B, R=A*Q, ...
(During a run, the equation and channel count don't change; they only change when loading the config.)
The main issue I am facing is that the inputs to each equation may have dependencies on virtual channels that do not have data yet.
I was trying to use:
Formula Nodes / MathScript: https://zone.ni.com/reference/en-XX/help/371361R-01/lvconcepts/formula_nodes/
https://knowledge.ni.com/KnowledgeArticleDetails?id=kA03q000000x30HCAQ&l=en-IL
All data should be chunked into a data stream (continuous sampling) that can be presented on a chart/graph and saved to CSV/TDMS.
Do I need some additional packages?
I have tried the following based on the example given, but I am getting strange results.
Answer
The elements you are looking for are not the Formula/Math Nodes, but rather the:
Formula Parsing VIs
Using these VIs, you are able to pass a calculation in the form of a string, along with an array of variable names, and then evaluate the formula. This allows the formula to be supplied at run time, whereas most other nodes require the formula to be fixed at compile time (with the exception of the Python node).
Example
An example of a very simple program that evaluates two different calculations using the same values and variables.
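The Formula Parsing VIs handle the string evaluation itself; the remaining issue from the question, virtual channels that feed other virtual channels, can be handled by sorting the equations into dependency order once at config-load time and then evaluating them in that order for every sample. A LabVIEW diagram can't be shown in text, so here is a rough Python sketch of just that ordering idea (all names are hypothetical, and eval() merely stands in for the Formula Parsing VIs):

import re

def evaluation_order(equations):
    """Topologically sort "NAME=EXPR" strings so that every virtual
    channel is computed after the channels it references."""
    parsed = {}
    for eq in equations:
        name, expr = (s.strip() for s in eq.split("="))
        parsed[name] = expr
    order, resolved, pending = [], set(), dict(parsed)
    while pending:
        progressed = False
        for name, expr in list(pending.items()):
            # dependencies are the other virtual channels referenced by this formula
            deps = set(re.findall(r"[A-Za-z_]\w*", expr)) & set(parsed)
            if deps <= resolved:
                order.append((name, expr))
                resolved.add(name)
                del pending[name]
                progressed = True
        if not progressed:
            raise ValueError("circular dependency among: " + ", ".join(pending))
    return order

# measured channels A..E and the virtual channels from the question
measured = {"A": 2.0, "B": 0.5, "C": 0.25, "D": 3.0, "E": 0.75}
equations = ["Q=B+C", "R=A*Q", "S=D*E", "T=R+S"]
values = dict(measured)
for name, expr in evaluation_order(equations):
    values[name] = eval(expr, {"__builtins__": {}}, values)  # stand-in for the VIs
print(values["T"])  # (2.0 * 0.75) + (3.0 * 0.75) = 3.75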
I am currently developing my own programming language. The codebase (in Lua) is composed of several modules, as follows:
The first, error.lua, has no dependencies;
lexer.lua depends only on error.lua;
prototypes.lua also has no dependencies;
parser.lua, instead, depends on all the modules above;
interpreter.lua is the fulcrum of the whole codebase. It depends on error.lua, parser.lua, and memory.lua;
memory.lua depends on functions.lua;
finally, functions.lua depends on memory.lua and interpreter.lua. It is required from inside memory.lua, so we can say that memory.lua also depends on interpreter.lua.
With "A depends on B" I mean that the functions declared in A need those declared in B.
The real problem, though, is when A depends on B which depends on A, which, as you can understand from the list above, happens quite frequently in my code.
To give a concrete example of my problem, here's what interpreter.lua looks like:
--first, I require the modules that DON'T depend on interpreter.lua
local parser, Error = table.unpack(require("parser"))
--(since error.lua is needed both in the lexer, parser and interpreter module,
--I only actually require it once in lexer.lua and then pass its result around)
--Then, I should require memory.lua. But since memory.lua and
--functions.lua need some functions from interpreter.lua to work, I just
--forward declare the variables needed and then define those functions themselves:
--forward declaration
local globals, new_memory, my_nil, interpret_statement
--functions I need to declare before requiring memory.lua
local function interpret_block()
--uses interpret_statement and new_memory
end
local function interpret_expression()
--uses new_memory, Error and my_nil
end
--Now I can safely require memory.lua:
globals, new_memory, my_nil = require("memory")(interpret_block, interpret_expression)
--(I'll explain why it returns a function to call later)
--Then I have to fulfill the forward declaration of interpret_statement:
function interpret_statement()
--uses interpret_expression, new_memory and Error
end
--finally, the result is a function
return function()
--uses parser, new_memory and globals
end
The memory.lua module returns a function so that it can receive interpret_block and interpret_expression as arguments, like this:
--memory.lua
return function(interpret_block, interpret_expression)
--declaration of globals, new_memory, my_nil
return globals, new_memory, my_nil
end
Now, I got the idea of the forward declarations here and that of the functions-as-modules (like in memory.lua, to pass some functions from the requiring module to the required module) here. They're all great ideas, and I must say that they work well. But you pay for it in readability.
In fact, breaking the code into smaller pieces this time made my work harder than it would have been if I had coded everything in a single file, which is impossible for me because it's over 1000 lines of code and I'm coding from a smartphone.
The feeling I have is that of working with spaghetti code, only on a larger scale.
So how could I solve the problem of my code being hard to understand because some modules need each other to work (without making all the variables global, of course)? How would programmers in other languages solve this problem? How should I reorganize my modules? Are there any standard conventions for using Lua modules that could also help me with this problem?
If we look at your Lua files as a directed graph, where an edge points from a dependency to its user, the goal is to modify the graph until it is acyclic, since you intend to get rid of the cycles.
A cycle is a set of nodes which, traversed in the direction of the edges, leads back to the starting node.
Now, the question is how to get rid of cycles?
The answer looks like this:
Let's consider node N and let {D1, D2, ..., Dm} be its direct dependencies. If no Di in that set depends on N, either directly or indirectly, then you can leave N as it is. In that case, the set of problematic dependencies is the empty set: {}
However, what if you have a non-empty set, like this: {PD1, ..., PDk}?
You then need to analyze each PDi, for i between 1 and k, along with N, and determine the subset of each PDi that does not depend on N and the subset of N that does not depend on any PDi. This way you can split N into N_base and N, and each PDi into PDi_base and PDi: N depends on N_base, just like all the PDi do, and each PDi depends on PDi_base along with N_base.
This approach minimizes cycles in the dependency graph. However, it is quite possible that a set of functions {f1, ..., fl} exists in this group which cannot be migrated into a _base module as discussed, due to dependencies, so some cycles remain. In that case you need to give the group in question a name, create a module for it, and migrate all those functions into that module.
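As a minimal Lua sketch of that idea (file and function names are hypothetical), the memory.lua <-> interpreter.lua cycle could be broken by extracting the part of the interpreter that memory.lua actually needs into an interpreter_base.lua:

-- interpreter_base.lua: requires neither memory.lua nor interpreter.lua
local M = {}
function M.interpret_block(block, mem)
    -- the logic that memory.lua needs, moved out of interpreter.lua
end
return M

-- memory.lua: the cycle is gone; only the base module is required
local base = require("interpreter_base")
local M = {}
function M.new_memory()
    local mem = {}
    -- base.interpret_block(block, mem) can be called freely here
    return mem
end
return M

-- interpreter.lua: builds on both, with no forward declarations
local base = require("interpreter_base")
local memory = require("memory")

With this split, require works normally in every file, and the forward-declaration and function-as-module tricks become unnecessary.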
I want to store a list of variable names in a new local variable, such that I do not have to type a long list of variable names for each regression. I am using Stata 14.
E.g., I have the following 5 independent variables: a b c d e and one dependent variable: f
I don't want:
regress f a b c d e
But I want something like:
regress f allvar
How can I generate allvar?
Unfortunately, this does not work:
local allvar a b c d e
The following works fine.
clear
set more off
sysuse auto
// first regression
regress price mpg rep78 weight
// second regression
local allvars mpg rep78 weight
regress price `allvars'
Unless you show us something reproducible and/or more explicit, it's difficult to see what the problem is. A report only mentioning "does not work" is usually useless.
See also the keyword _all in help varlist.
You are using a local macro. If you are running the code by parts, then don't. You need to run the whole code, all at once. Read [P] macro for details. An excerpt:
Local macros exist solely within the program or do-file in which they
are defined. If that program or do-file calls another program or
do-file, the local macros previously defined temporarily cease to
exist, and their existence is reestablished when the calling program
regains control. When a program or do-file ends, its local macros are
permanently deleted.
A common reason why your command sometimes "does not work" is that you ran your .do file line by line, rather than all in one go. A local macro is local to the program or .do file in which it is defined (hence the name). So if you ran the line local allvar a b c d e on its own, that local macro is created and then disappears as soon as Stata finishes running that section of your .do file. There are two solutions:
You can get into the habit of running the definition of local macros and their use in one go. It is actually good practice to make many small .do files and make each .do file self-contained (see for example this excellent book), so you can easily just run the entire .do file each time you want to check or change something.
Alternatively, you can use global macros. These continue to exist after the .do file has finished, for the rest of the Stata session. As someone who programs in Stata, using global macros hurts my eyes, but I guess that if you use Stata only to analyse data it does little harm.
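For example, the global version of the earlier example looks like this (note the $ prefix when a global macro is used):

global allvars mpg rep78 weight
regress price $allvars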
As an aside, allvar does not seem like the right name for that local macro: it does not contain all variables, as it excludes the variable f. This sounds pedantic (and it is), but it is good practice to use names that accurately describe their content. In a real project we tend to come back to our code after some time. A common scenario is that you submitted a paper to a journal, it took half a year or more for the reviews to come in, and now you need to "read" your own .do file to understand what you did half a year ago. At that point you are very happy that you were pedantic when writing the .do file...
As a further aside, assuming that a b c d e f are indeed all the variables in your dataset, you can also create your local using:
ds f, not
local rhs `r(varlist)' // rhs short for right-hand side
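after which the regression is simply:

regress f `rhs'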
In short, I'm trying to "sort" the incoming results of threadpool work items as they finish. I have a functional solution, but there's no way in the world it's the best way to do it (it's prone to huge pauses). So here I am! I'll try to hit the bullet points of what's going on and what needs to happen, and then post my current solution.
The intent of the code is to get information about files in a directory, and then write that to a text file.
I have a list (Counter.ListOfFiles) that is a list of the file paths sorted in a particular way. This is the guide that dictates the order I need to write to the text file.
I'm using a threadpool to collect the information about each file and to build a StringBuilder with all of the text ready to write to the text file. I then call a procedure (SyncUpdateResource, included below), passing the StringBuilder (strBld) from that thread along with the path of the file that particular thread just wrote about (Xref).
The procedure includes a SyncLock to hold all the other threads until it finds a thread passing the correct information. The "correct" information is when the Xref passed by the thread matches the first item in my list (FirstListItem). When that happens, I write to the text file, delete the first item in the list, and do it again with the next thread.
The way I'm using the monitor is probably not great; in fact, I have little doubt I'm using it in an offensively wanton manner. Basically, while the Xref (from the thread) <> the first item in my list, I do a PulseAll on the monitor. I originally was using Monitor.Wait, but it would eventually just give up trying to sort through the list, even when using a pulse elsewhere. I may have just been coding something awkwardly. Either way, I don't think it's going to change anything.
Basically the problem comes down to the fact that the monitor pulses through all of the items in its queue, when there's a good chance the item I am looking for was passed to it earlier in the queue, so it will sort through all of the items again before looping back around to find one that matches. The result is that my code hits one of these and takes a huge amount of time to complete.
I'm open to believing I'm just using the wrong tool for the job, or just not using the tool I have correctly. I would strongly prefer some sort of threaded solution (unsurprisingly, it's much faster!). I've been messing around a bit with the Parallel Task functionality today, and a lot of it looks promising, but I have even less experience with that than with the threadpool, and you can see how I'm abusing that! Maybe something with a queue? You get the idea. I am directionless. Anything someone could suggest would be much appreciated. Thanks! Let me know if you need any additional information.
Private Sub SyncUpdateResource(strBld As Object, Xref As String)
    SyncLock (CType(strBld, StringBuilder))
        Dim FirstListItem As String = Counter.ListOfFiles.First
        Do While Xref <> FirstListItem
            FirstListItem = Counter.ListOfFiles.First
            'This makes the code much faster for reasons I can only guess at.
            Thread.Sleep(5)
            Monitor.PulseAll(CType(strBld, StringBuilder))
        Loop
        Dim strVol As String = Form1.Volname
        Dim strLFPPath As String = Form1.txtPathDir
        My.Computer.FileSystem.WriteAllText(strLFPPath & "\" & strVol & ".txt", strBld.ToString, True)
        Counter.ListOfFiles.Remove(Xref)
    End SyncLock
End Sub
This is a pretty typical multiple producer, single consumer application. The only wrinkle is that you have to order the results before they're written to the output. That's not difficult to do. So let's let that requirement slide for a moment.
The easiest way in .NET to implement a producer/consumer relationship is with BlockingCollection, which is a thread-safe FIFO queue.
In your case, the producer threads get items, do whatever processing they need to, and then put the items onto the queue. There's no need for any explicit synchronization; the BlockingCollection class implementation does that for you.
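For example, a minimal sketch of the shared queue side (the names are illustrative; ItemType stands for whatever type you enqueue):

var sharedQueue = new BlockingCollection<ItemType>();

// inside each producer thread, after processing a file:
sharedQueue.Add(item);

// once all producers are finished:
sharedQueue.CompleteAdding(); // lets the consumer's GetConsumingEnumerable() loop terminate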
Your consumer pulls things from the queue and outputs them. You can see a really simple example of this in my article Simple Multithreading, Part 2. (Scroll down to the third example if you're just interested in the code.) That example just uses one producer and one consumer, but you can have N producers if you want.
Your requirements have a little wrinkle in that the consumer can't just write items to the file as it gets them. It has to make sure that it's writing them in the proper order. As I said, that's not too difficult to do.
What you want is a priority queue of some sort onto which you can place an item if it comes in out of order. Your priority queue can be a sorted list or even just a sequential list if the number of items you expect to get out of order isn't very large. That is, if you typically have only a half dozen items at a time that could be out of order, then a sequential list could work just fine.
I'd use a heap because it performs well. The .NET Framework doesn't supply a heap, but I have a simple one that works well for jobs like this. See A Generic BinaryHeap Class.
So here's how I'd write the consumer (the code is in pseudo-C#, but you can probably convert it easily enough).
The assumption here is that you have a BlockingCollection called sharedQueue that contains the items. The producers put items on that queue. Consumers do this:
var heap = new BinaryHeap<ItemType>();
int expectedSequenceKey = 0; // initialize to whatever key the first item should have
foreach (var item in sharedQueue.GetConsumingEnumerable())
{
if (item.SequenceKey == expectedSequenceKey)
{
// output this item
// then check the heap to see if other items need to be output
expectedSequenceKey = expectedSequenceKey + 1;
while (heap.Count > 0 && heap.Peek().SequenceKey == expectedSequenceKey)
{
var heapItem = heap.RemoveRoot();
// output heapItem
expectedSequenceKey = expectedSequenceKey + 1;
}
}
else
{
// item is out of order
// put it on the heap
heap.Insert(item);
}
}
// if the heap contains items after everything is processed,
// then some error occurred.
One glaring problem with this approach as written is that the heap could grow without bound if one of your producers crashes or goes into an infinite loop, because the item the consumer is waiting for never arrives. But then, your other approach probably would suffer from that as well. If you think that's an issue, you'll have to add some way to skip an item that you think won't ever be forthcoming. Or kill the program. Or something.
If you don't have a binary heap or don't want to use one, you can do the same thing with a SortedList<ItemType>. SortedList will be faster than List, but slower than BinaryHeap if the number of items in the list is even moderately large (a couple dozen). Fewer than that and it's probably a wash.
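For instance, a sketch of that alternative using the BCL's SortedList, keyed by sequence number (again just illustrative):

var pending = new SortedList<int, ItemType>();
pending.Add(item.SequenceKey, item); // instead of heap.Insert(item)
// the smallest outstanding key is pending.Keys[0];
// read it with pending.Values[0] and remove it with pending.RemoveAt(0)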
I know that's a lot of info. I'm happy to answer any questions you might have.