I am using a program coded in Delphi 7 (sadly I cannot use a newer version for this program) which considers individual persons. For each person, I need to load a bunch of values (0 to 90 at most; the exact number depends on the person and is not fixed) which are used later in the code. After trying a number of approaches, including loading via Excel (which was horribly slow), someone suggested loading the data through Access. I have managed to put together the following code so far:
var
  MainConnection: TADOConnection;
  Table: TADOTable;
  StrConnection: String;

// First, open a connection to load the values from
MainConnection := TADOConnection.Create(nil);
StrConnection := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\test.mdb;Mode=Read;Persist Security Info=False;';
MainConnection.LoginPrompt := False;
MainConnection.ConnectionString := StrConnection;
MainConnection.Connected := True;

Table := TADOTable.Create(nil);
Table.Connection := MainConnection;
Table.TableName := 'Sheet1';
Table.Open;
// Get the first three values, which I absolutely require
Firstvalue  := Table.Fields[0].Value;
Secondvalue := Table.Fields[1].Value;
Thirdvalue  := Table.Fields[2].Value;

// Whether I need additional values depends on the first and second values;
// if the first is a specific value (100), I do not need any of the others
nrofvaluestoget := Round(Secondvalue - Firstvalue);
if (Firstvalue = 100) then
  nrofvaluestoget := 0;

if (nrofvaluestoget > 0) then
begin
  for k := 0 to nrofvaluestoget do
    Valueholder[k] := Table.Fields[5 + k].Value; // values for the Valueholder
end;
Table.Next; // go to the next person
This links the Access database and technically does what I want. However, while it is quicker than loading an Excel file, it is still quite slow due to the "nrofvaluestoget" loop. Skipping that and loading all values for a person at once would speed up the process quite a bit.
As far as I'm aware this may be possible using an SQL query, something akin to 'SELECT * FROM Sheet1'. However, I am not familiar with SQL, let alone using it through Delphi 7. Is it even possible to get all the values at once and assign them immediately to the "Valueholder" with Delphi 7? Or at the very least, is there some way to speed up the code above that I'm not aware of? Any help would be much appreciated.
EDIT:
Per Juan's suggestion, I have added some additional description of the database.
I posted a picture as an example of the database, as I was unable to embed one or create a decent-looking table.
Let's say I have three persons. Person 1 would have 15 as a first age, and 16 as the second age.
In the current loop, Valueholder would have the value 2 at index 0, and value 0 at index 1. Person 1 has no further ages with values, so these are not considered in the loop.
When the next person is evaluated, all indices of Valueholder are set to their base value (blank).
Person 2 has 18 as the first age and 20 as the second. Valueholder then gets 3 values, namely: the value 8 at index 0, the value 4 at index 1 and the value 2 at index 2.
For the last person, all indices of Valueholder are again reset to their base value.
Person 3 has 100 as a first age; this is an indication that this person has no values which need to be loaded, so Valueholder stays blank.
I hope this clarifies the question a bit.
(If this is a one-off import:)
I would recommend exporting the data to a CSV file and using TFileStream to read it in your Delphi program. This will be faster than having to connect to Access, SQL Server, or any other database.
I am working on a PyQt5 app. I want to display a large dataset extracted from a MongoDB database.
To do so, I extract my collection into 3 cursors (I need to sort the display). Until now, I have been casting the cursors into a list and then emitting it.
But now my database has grown significantly in size, and runtime has become a major issue; going through lists is time-consuming. Therefore, I am trying to find a way to directly access the cursor in its entirety without looping through all of it (I know it sounds a bit tricky, since the cursor is just a reference to the collection).
An example of what is done now:
D1 = list(self.collection.find({"$and": [{"location": current_location}, {"vehicle": "Car"}]}))
D2 = list(self.collection.find({"$and": [{"location": current_location}, {"vehicle": {"$nin": ["Car", "Truck"]}}]}))
D3 = list(self.collection.find({"$and": [{"location": current_location}, {"vehicle": "Truck"}]}).sort([("_id", -1)]).limit(60 - len(D1)))
extracted = D1 + D2 + D3
l_data_extracted.emit(extracted) # send the loaded data to the front
Loading 3 cursors is time-consuming, but then merging them into 1 list makes it heavier still for the app.
I looked for resources about cursor management, but every time I see answers that involve looping or casting (which I already do).
Is there a way to directly emit the cursor, a bit like passing an argument by reference in C to get at the pointed-to data? Or am I bound to loop over it and cast it into a list due to its particular nature?
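One hedged possibility (my sketch, not tested against the poster's app; collection, current_location, and emit below are stand-ins for the poster's objects): a PyQt signal declared with an object payload can carry any Python value, including a lazy iterator, so the cursors can be chained and emitted without ever building the combined list; documents are then pulled from MongoDB only as the receiver iterates.

import itertools

def stream_vehicles(collection, current_location, emit):
    # Each find() returns a lazy pymongo cursor; no documents are fetched yet.
    cars = collection.find({"location": current_location, "vehicle": "Car"})
    others = collection.find({"location": current_location,
                              "vehicle": {"$nin": ["Car", "Truck"]}})
    trucks = collection.find({"location": current_location,
                              "vehicle": "Truck"}).sort([("_id", -1)])
    # The original .limit(60 - len(D1)) is omitted here: it forces D1 to be
    # materialized first, which is exactly what lazy emission tries to avoid.
    # chain() yields from each cursor in turn without building one big list.
    emit(itertools.chain(cars, others, trucks))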
Is there a way to program this with a parameter that is not hard-coded?
That is, to use a :SomeValue host variable as in this question/snippet:
EXEC SQL
  FETCH NEXT ROWSET FROM C_NJSRD2_cursor_declared_and_opened
  FOR :SomeValue ROWS
  INTO
    :NJCT0022.SL_ISO2 :NJMT0022.iSL_ISO2
    etc....
Here is some clarification:
Parametrization of the request as posted in the opening question actually works if I set the host variable :SomeValue to 1 and define the host variable arrays (to be filled from the database) with size 1, like:
struct
??<
  char SL_ISO2 ??(1??) ??(3??); // sorry for z/OS trigraphs
  etc..
And it also works if I define the host variable arrays with a larger fixed integer size (e.g. 20) and hard-code that value in place of :SomeValue in the rowset fetch:
EXEC SQL
  FETCH NEXT ROWSET FROM C_NJSRD2
  FOR 20 ROWS
  INTO
    :NJCT0022.SL_ISO2 :NJMT0022.iSL_ISO2
   ,:NJCT0022.BZ_COUNTRY :NJMT0022.iBZ_COUNTRY
   ,:NJCT0022.KZ_RISK :NJMT0022.iKZ_RISK
I wish to receive the number of rows from the calling program (COBOL) and ideally size the host variable arrays accordingly. To avoid the array-sizing problem, oversizing the host variable arrays to a larger fixed value would also be fine.
The combinations I tried return compile errors:
HOST VARIABLE ARRAY "NJCT0022" IS EITHER NOT DEFINED OR IS NOT USABLE
And, in good tradition, here is an answer to my own question.
The good people of SO will upvote me for sure now.
Rowsets are very fast. Besides making the host variables arrays (or, for chars, arrays of arrays), these cursors just require adjusting the program functions that store values and set null indicators to do so in loops. A rowset fetch looks like this:
FETCH NEXT ROWSET FROM C_NJSRD2
FOR 19 ROWS
Rowset cursors cannot change the host array (that is, array-of-arrays) size dynamically.
Unlike scroll cursors, they cannot jump to a position or go backwards. They can, however, go forward not by the whole preset rowset number of rows but by just a single row:
FETCH NEXT ROWSET FROM C_NJSRD2 FOR 1 ROWS
INTO
So, to answer my question: to make the algorithm able to accept any requested row count for fetches, it is basically just a question of segmenting the request into rowset fetches plus, at the end, single-row fetches until the requested number is met. To calculate the loop counters for the rowsets and the one-liners:
/* Split the requested row count into full-rowset fetches plus
   single-row fetches (??!??! is the trigraph spelling of ||) */
if ((iRowCount > iRowsetPreset) &&
    ((iRowCount % iRowsetPreset) != 0))
??<
  iOneLinersCount = iRowCount % iRowsetPreset;
  iRowsetsCount   = (iRowCount - iOneLinersCount) / iRowsetPreset;
??>
if ((iRowCount == iRowsetPreset) ??!??!
    ((iRowCount % iRowsetPreset) == 0))
??<
  iOneLinersCount = 0;
  iRowsetsCount   = iRowCount / iRowsetPreset;
??>
if (iRowCount < iRowsetPreset)
??<
  iOneLinersCount = iRowCount;
  iRowsetsCount   = 0;
??>
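As an aside (my check, not part of the original answer): the three branches above are just an integer quotient and remainder, so the loop counters can be verified against a plain divmod. A small Python sketch of the same arithmetic:

# Full-rowset fetches = row_count // rowset_preset,
# single-row fetches  = row_count %  rowset_preset.
def fetch_plan(row_count, rowset_preset):
    rowsets, one_liners = divmod(row_count, rowset_preset)
    return rowsets, one_liners

# e.g. 47 requested rows with a preset rowset of 20:
# two "FOR 20 ROWS" fetches plus seven single-row fetches
assert fetch_plan(47, 20) == (2, 7)
assert fetch_plan(40, 20) == (2, 0)   # exact multiple: no one-liners
assert fetch_plan(13, 20) == (0, 13)  # fewer rows than the preset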
OK, first off, please see the post below:
Updating a record in Elm
I'm curious as to how that is actually possible, since it in effect makes the record a variable, something functional programming tries to avoid.
What happened to my old Bill? Did someone basically delete my x = 4 and make a new one, x = boo_far?
Functional programming avoids mutation. In Elm, records are not mutated, they are copied.
Even saying that they are copied is a bit of a misrepresentation: they are not fully cloned byte for byte; that would be horribly inefficient. Their internal structure is more graph-like, which allows for efficient pointer-based operations that effectively extend the underlying structure, without mutating already-existing nodes and edges, when you perform an operation that copies into a new record.
Conceptually speaking, it may help to think of it this way: Once you copy into a new record value, the old one sticks around forever. However, our computers don't have infinite memory, and those old values may often go permanently unused, so we leave it to Javascript's garbage collector to clean up those old pointers.
Consider the example in the answer given by @timothyclifford:
-- Create Bill Gates
billGates = { age = 100, name = "gates" }

-- Copy to Bill Nye
billNye = { billGates | name = "Nye" }

-- Copy to a younger Bill Nye
youngBillNye = { billNye | age = 22 }
The internal representation could be thought of as a graph of nodes shared between the three records (the original answer illustrated this with a diagram).
Conceptually, you can think of those values as living in perpetuity. However, let's say that billGates gets selected for garbage collection because it is no longer being referenced (e.g. its stack frame is popped). The billGates pointer is deleted and the name == "gates" node is deleted, but all other nodes and edges remain untouched.
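A cross-language illustration (my sketch of the observable behaviour, not Elm's actual runtime): updating with { r | field = v } produces a new value and leaves every old binding intact. A Python dict copy-with-update behaves the same way observably, although real Elm records share structure rather than copying each field:

# Build a new dict from the old one, overriding one key; the original
# binding is never mutated.
bill_gates = {"age": 100, "name": "gates"}
bill_nye = {**bill_gates, "name": "Nye"}
young_bill_nye = {**bill_nye, "age": 22}

assert bill_gates == {"age": 100, "name": "gates"}  # original intact
assert young_bill_nye == {"age": 22, "name": "Nye"}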
I have a function returning a setof records; the output can be seen in the picture I posted.
I have a range of boards of length 2.8 m through 4.9 m (columns ln28 through ln49 respectively). They have characteristics that set bits, as seen in the bincodes (9, 2049, 4097, etc.). For each given board length, I need to sum the number of boards for each bincode; e.g. in this case ln28 for bincode 4097 would be 3 + 17 + 14 = 34. The rows where brdsource = 128 are where I intend to store these values, so for the row with brdsource 128 and bincode 4097, I want to store 34 in ln28.
You will see that I have 0's in the ln28 values for all rows where brdsource = 128. I have generated these extra records as part of my setof records, and I am trying to use a multidimensional array to add up the values and keep track of them, as seen above, with the array Summary[boardlength 0-8][bincode 0-4].
Question 1 - I noticed that if I add 1 (it could be any number, for argument's sake) to an array location, it returns a null value (no error, just nothing in the table cell). However, if I first set the array location to 0 and then add 1, it works perfectly. How can an array defined as type integer hold a null value?
Question 2 - How do I add my respective record's (call it rc) board length count to the array? I.e., I want to do something like this:
if (rc.bincode = 4097) then Summary[0][2] := Summary[0][2] + rc.ln28;
and then later on, when injecting this into my table (during the brdsource = 128 phase):
if (rc.bincode = 4097) then rc.ln28 := Summary[0][2];
Of course, I may be going about this in a completely unorthodox way (though to me SQL is just plain unorthodox, sigh). I have made attempts to sum all previous records based on the required conditions (e.g. using a CASE WHEN ... END statement), but I proved what I already suspected: each returned record is simply a single row of data. There is just no means of accessing data in the previous record rows as returned by the function's FOR LOOP ... END LOOP.
A final note is that everything discussed here is occurring inside the function. I am not attempting to add records etc. to data returned by the function.
I am using PostgreSQL 9.2.9, compiled by Visual C++ build 1600, 64-bit. And yes I am aware this is an older version.
A question about a piece of Lua code came up during a code review I had recently. The code in question flushes a cache and reinitializes it with some data:
for filename, _ in pairs(fileTable) do
  fileTable[filename] = nil
end
-- reinitialize here
Is there any reason the above loop shouldn't be replaced with this?
fileTable = { }
-- reinitialize here
It is due to table resize/rehash overhead. When a table is created, it is empty. When you insert an element, a rehash takes place and the table size is grown to 1; the same happens when you insert another element. The rule is that a table is grown whenever there is insufficient space (in either the array or the hash part) to hold another element. The new size is the smallest power of 2 that can accommodate the required number of elements, i.e. a rehash occurs on an insertion when the table currently holds 0, 1, 2, 4, 8, etc. elements.
Now, the technique you're describing saves those rehashes: since Lua does not shrink tables, when you have frequent fill/flush operations on a table it is better (performance-wise) to reuse it as in your example than to create a new empty table.
Update:
I've put up a little test:
-- Reuse one table, clearing it with nils each iteration
local function rehash1(el, loops)
  local t = {}
  for i = 1, loops do
    for j = 1, el do
      t[j] = j
    end
    for k in ipairs(t) do t[k] = nil end
  end
end

-- Allocate a fresh table each iteration (old one is left to the GC)
local function rehash2(el, loops)
  for i = 1, loops do
    local t = {}
    for j = 1, el do
      t[j] = j
    end
  end
end
local function test(elements, loops)
  local time = os.time()
  rehash1(elements, loops)
  local time1 = os.time()
  rehash2(elements, loops)
  local time2 = os.time()
  print("Time nils:  ", tostring(time1 - time), "\n")
  print("Time empty: ", tostring(time2 - time1), "\n")
end
The results are quite interesting. Running test(4, 10000000) on Lua 5.1 gave 7 seconds for nils and 10 seconds for empties. For tables bigger than 32 elements, the empty version was faster (the bigger the table, the bigger the difference): test(128, 400000) gave 9 seconds for nils and 5 seconds for empties.
Now on LuaJIT, where alloc and GC operations are relatively slow, running test(1024, 1000000) gave 3 seconds for nils and 7 seconds for empties.
P.S. Notice the sheer performance difference between plain Lua and LuaJIT: for 1024-element tables, plain Lua did 100,000 test iterations in about 20 seconds, while LuaJIT did 1,000,000 iterations in 10 seconds!
Unless you have evidence otherwise, you'd be better off trusting Lua's garbage collection: just create a new, empty table when you need it.
Allocating a new table is a costly operation in Lua (as object allocation is in pretty much any dynamic language). Additionally, constantly "losing" newly created tables to the GC puts additional strain on performance, as well as on memory, since every created table stays in memory until the GC actually comes to reclaim it.
The technique in your example trades those disadvantages for the time required to explicitly remove all elements from the table. This is always a memory saving and, depending on the number of elements, may often be a performance improvement as well.
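For comparison only (a rough Python analogue I put together, not from either answer; container internals differ between runtimes, so these numbers say nothing authoritative about Lua): the same reuse-versus-reallocate trade-off can be timed in any dynamic language.

import timeit

def reuse(n, loops):
    d = {}
    for _ in range(loops):
        for j in range(n):
            d[j] = j
        d.clear()  # reuse the same dict object, dropping its contents

def fresh(n, loops):
    for _ in range(loops):
        d = {}     # allocate a new container each pass; old one goes to the GC
        for j in range(n):
            d[j] = j

print("reuse:", timeit.timeit(lambda: reuse(128, 1000), number=100))
print("fresh:", timeit.timeit(lambda: fresh(128, 1000), number=100))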