Using a table as a key in a table: the value is not found when looked up - Lua

I am building something fairly complex and I am trying to use tables as keys, because I have found that Lua allows it, that is:
{[{1,2}]="Meep"}
The issue is that when I try to look the value up using an identical table constructor, it isn't found.
I have tried searching for an explanation, but I have no clue why it won't work.
local c = {[{1,2}]="Meep"}
print(c[{1,2}],c)
What I expect it to print is
"Meep",{[{1,2}]="Meep"}
but what I get is
nil,{[{1,2}]="Meep"}
If, however, I try
local m={1,2}
local c = {[m]="Meep"}
print(c[m],c)
it prints the correct result. Is there a way to avoid that middleman? After all, shouldn't m == {1,2} return true?

The problem you have is that tables in Lua are represented as references. If you compare two different tables, you are comparing those references, so the comparison is only true if the two operands are exactly the same table.
t = { 1, 2, 3 }
t2 = { 1, 2, 3 }
print(t == t) -- true
print(t2 == t) -- false
print(t2 == t2) -- true
Because of this, tables are passed to functions by reference.
function f(t)
t[1] = 5
end
t2 = { 1 }
f(t2)
print(t2[1]) -- 5
To bypass this behavior, you could (as suggested in the comments) serialize the table before using it as a key.
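A minimal sketch of that idea, assuming the key tables are flat arrays of numbers or strings (the serialize helper below is purely illustrative, not a general-purpose serializer):
-- Hypothetical helper: build a deterministic string from an array-like table
-- and use that string, rather than the table itself, as the key.
local function serialize(t)
    return table.concat(t, ",")
end

local c = {}
c[serialize({1, 2})] = "Meep"
print(c[serialize({1, 2})]) -- "Meep", because equal contents give equal strings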


Meaning of "Lua does not perform the primitive assignment." in 2.4 (concerning __newindex)

From https://www.lua.org/manual/5.3/manual.html, section 2.4, the description of the __newindex metamethod states the following:
__newindex: The indexing assignment table[key] = value. Like the index event, this event happens when table is not a table or when key is not
present in table. The metamethod is looked up in table.
Like with indexing, the metamethod for this event can be either a
function or a table. If it is a function, it is called with table,
key, and value as arguments. If it is a table, Lua does an indexing
assignment to this table with the same key and value. (This assignment
is regular, not raw, and therefore can trigger another metamethod.)
Whenever there is a __newindex metamethod, Lua does not perform the
primitive assignment. (If necessary, the metamethod itself can call
rawset to do the assignment.)
Given that, I ask what the following specifically intends to say:
"Lua does not perform the
primitive assignment. (If necessary, the metamethod itself can call
rawset to do the assignment.)"
Does this mean that if the value is a number, which is a primitive, it will not be assigned to the provided table through the metamethod event and we have to use rawget or something? This is very confusing and contradictory to me.
I want to show some examples to help clear up this confusion.
The primitive assignment example:
local test = {}
test['x'] = 1 -- equal to rawset(test, 'x', 1)
print(test['x']) -- 1
print(rawget(test,'x')) -- 1
The primitive assignment test['x'] = 1 is equivalent to rawset(test, 'x', 1) when the table test has no __newindex metamethod.
Then the __newindex metamethod example:
local test = {}
setmetatable(test, {__newindex = function(t,key,value) end})
test['x'] = 1
print(test['x']) -- nil
print(rawget(test,'x')) -- nil
The assignment test['x'] = 1 triggers a call to the __newindex function.
If __newindex does nothing, then nothing is stored, and test['x'] evaluates to nil.
If the __newindex function calls rawset:
local test = {}
setmetatable(test, {
__newindex = function(t,key,value)
rawset(t,key,value) -- t:test key:'x' value:1
end})
test['x'] = 1
print(test['x']) -- 1
print(rawget(test,'x')) -- 1
This code has the same effect as the first example.
That is what the manual means when it says:
"Lua does not perform the primitive assignment. (If necessary, the metamethod itself can call rawset to do the assignment.)"
Then the question is, how can we use __newindex?
It can be used to separate old and new indices in a table.
local test = {y = 1}
local newtest = {}
setmetatable(test, {
__newindex =
function(t,key,value)
newtest[key] = value
end,
__index = newtest
})
test["x"] = 1
print(test['x']) -- 1
print(test['y']) -- 1
print(rawget(test, 'x')) -- nil
print(rawget(test, 'y')) -- 1
The old index 'y' and the new index 'x' can both be accessed via test[key], and they can be distinguished with rawget(test, key).
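One more hedged sketch, to make the quoted sentence concrete: because Lua skips the primitive assignment whenever __newindex exists, the metamethod has to store the value itself, and a plain assignment inside it would just re-trigger the metamethod, so rawset is the usual way out.
local t = setmetatable({}, {
    __newindex = function(t, key, value)
        -- t[key] = value   -- a plain assignment here would fire __newindex
        --                     again and recurse until the stack overflows
        rawset(t, key, value) -- performs the raw store, bypassing the metamethod
    end
})
t.x = 1
print(t.x)            -- 1
print(rawget(t, 'x')) -- 1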

Struggling with simple boolean WHERE clause

Tired brain - perhaps you can help.
My table has two bit fields:
1) TestedByPCL and
2) TestedBySPC.
Both may = 1.
The user interface has two corresponding check boxes. In the code I convert the checks to int.
int TestedBySPC = SearchSPC ? 1 : 0;
int TestedByPCL = SearchPCL ? 1 : 0;
My WHERE clause looks something like this:
WHERE TestedByPCL = {TestedByPCL.ToString()} AND TestedBySPC = {TestedBySPC.ToString()}
The problem is that when only one checkbox is selected I want to return rows having the corresponding field set to 1, or both fields set to 1.
Right now, when both fields are set to 1, my WHERE clause requires both checkboxes to be checked instead of only one.
So, if one checkbox is ticked, return records with that field = 1, regardless of whether the other field = 1.
Second attempt (I think I've got it now):
WHERE ((TestedByPCL = {chkTestedByPCL.IsChecked} AND TestedBySPC = {chkTestedBySPC.IsChecked})
OR
(TestedByPCL = 1 AND TestedBySPC = 1 AND 1 IN ({chkTestedByPCL.IsChecked}, {chkTestedBySPC.IsChecked})))
Misunderstood the question.
Change the AND to an OR:
WHERE TestedByPCL = {chkTestedByPCL.IsChecked} OR TestedBySPC = {chkTestedBySPC.IsChecked}
Also:
SQL Server does not have a Boolean data type; its closest option is the bit data type.
The usage of curly brackets suggests you are using string concatenation to build your WHERE clause. This might not be a big deal when you're handling checkboxes, but it's a security risk when handling free-text input, as it's an open door for SQL injection attacks. Better to use parameters whenever you can.

Using an associative array to count value totals in Lua

I want to count the data type of each Redis key. I wrote the following code, but it fails with an error when run. How can I fix it?
local detail = {}
detail.hash = 0
detail.set = 0
detail.string = 0
local match = redis.call('KEYS','*')
for i,v in ipairs(match) do
local val = redis.call('TYPE',v)
detail.val = detail.val + 1
end
return detail
(error) ERR Error running script (call to f_29ae9e57b4b82e2ae1d5020e418f04fcc98ebef4): #user_script:10: user_script:10: attempt to perform arithmetic on field 'val' (a nil value)
The error tells you that detail.val is nil. That means there is no value in the table for the key "val", hence you are not allowed to perform any arithmetic operations on it.
Problem a)
detail.val is syntactic sugar for detail["val"]. So if you expect val to be a string the correct way to use it as a table key is detail[val].
Possible problem b)
Doing some quick research, I found that this Redis call might return a table, not a string. So if detail[val] doesn't work, check val's type.
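A minimal sketch of a corrected script, assuming redis.call('TYPE', key) returns a status reply that Lua sees as a table with an ok field holding the type name (adjust the unwrapping if your Redis version hands back a plain string):
local detail = { hash = 0, set = 0, string = 0 }
for _, key in ipairs(redis.call('KEYS', '*')) do
    local t = redis.call('TYPE', key)
    -- unwrap the status reply to a plain string type name
    local typename = (type(t) == 'table') and t.ok or t
    detail[typename] = (detail[typename] or 0) + 1
end
-- Redis converts a returned Lua table to a flat reply array and drops
-- string keys, so return the counts as an explicit list of pairs instead
return { 'hash', detail.hash, 'set', detail.set, 'string', detail.string }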

Handle null values within SQL IN clause

I have the following SQL query in my hbm file. SCHEMA is the schema, and A and B are two tables.
select
*
from SCHEMA.A os
inner join SCHEMA.B o
on o.ORGANIZATION_ID = os.ORGANIZATION_ID
where
case
when (:pass = 'N' and os.ORG_ID in (:orgIdList)) then 1
when (:pass = 'Y') then 1
end = 1
and (os.ORG_SYNONYM like :orgSynonym or :orgSynonym is null)
This is a pretty simple query. I had to use the case-when to handle a null value for the "orgIdList" parameter (passing null to the SQL IN clause gives an error). Below is the relevant Java code which sets the parameters.
if (_orgSynonym.getOrgIdList().isEmpty()) {
query.setString("orgIdList", "pass");
query.setString("pass", "Y");
} else {
query.setString("pass", "N");
query.setParameterList("orgIdList", _orgSynonym.getOrgIdList());
}
This works and gives me the expected output, but I would like to know if there is a better way to handle this situation (orgIdList sometimes becomes null).
There must be at least one element in the comma-separated list that defines the set of values for the IN expression.
In other words, regardless of Hibernate's ability to parse the query and to pass an IN (), and regardless of the support of this syntax by particular databases (PostgreSQL doesn't, according to the Jira issue), best practice is to use a dynamic query here if you want your code to be portable (and I usually prefer to use the Criteria API for dynamic queries).
If not, you need some other workaround like what you have done, or wrapping the list in a custom list, etc.

Saving state of closure in Groovy

I would like to use a Groovy closure to process data coming from a SQL table. For each new row, the computation would depend on what has been computed previously. However, new rows may become available on further runs of the application, so I would like to be able to reload the closure, initialised with the intermediate state it had when the closure was last executed in the previous run of the application.
For example, a closure intending to compute the moving average over 3 rows would be implemented like this:
def prev2Val = null
def prevVal = null
def prevId = null
Closure c = { row ->
println([ prev2Val, prevVal, prevId])
def latestVal = row['val']
if (prev2Val != null) {
def movMean = (prev2Val + prevVal + latestVal) / 3
sql.execute("INSERT INTO output(id, val) VALUES (?, ?)", [prevId, movMean])
}
sql.execute("UPDATE test_data SET processed=TRUE WHERE id=?", [row['id']])
prev2Val = prevVal
prevVal = latestVal
prevId = row['id']
}
test_data has 3 columns: id (auto-incremented primary key), val and processed. A moving mean is calculated based on the two previous values and inserted into the output table, against the id of the previous row. Processed rows are flagged with processed=TRUE.
If all the data was available from the start, this could be called like this:
sql.eachRow("SELECT id, val FROM test_data WHERE processed=FALSE ORDER BY id", c)
The problem comes when new rows become available after the application has already been run. This can be simulated by processing a small batch each time (e.g. using LIMIT 5 in the previous statement).
I would like to be able to dump the full state of the closure at the end of the execution of eachRow (saving the intermediate data somewhere in the database for example) and re-initialise it again when I re-run the whole application (by loading those intermediate variable from the database).
In this particular example, I can do this manually by storing the values of prev2Val, prevVal and prevId, but I'm looking for a generic solution where knowing exactly which variables are used wouldn't be necessary.
Perhaps something like c.getState() which would return [ prev2Val: 1, prevVal: 2, prevId: 6] (for example), and where I could use c.setState([ prev2Val: 1, prevVal: 2, prevId: 6]) next time the application is executed (if there is a state stored).
I would also need to exclude sql from the list. It seems this can be done using c.@sql = null.
I realise this is unlikely to work in the general case, but I'm looking for something sufficiently generic for most cases. I've tried to dehydrate, serialize and rehydrate the closure, as described in this Groovy issue, but I'm not sure how to save and store all the @ fields in a single operation.
Is this possible? Is there a better way to remember state between executions, assuming the list of variables used by the closure isn't necessarily known in advance?
I am not sure this will work in the long run, and you might be better off returning a list containing the values to pass to the closure to get the next set of data, but you can interrogate the binding of the closure.
Given:
def closure = { row ->
a = 1
b = 2
c = 4
}
If you execute it:
closure( 1 )
You can then compose a function like:
def extractVarsFromClosure( Closure cl ) {
cl.binding.variables.findAll {
!it.key.startsWith( '_' ) && it.key != 'args'
}
}
Which when executed:
println extractVarsFromClosure( closure )
prints:
['a':1, 'b':2, 'c':4]
However, any 'free' variables defined in the local binding (without a def) will be in the closure's binding too, so:
fish = 42
println extractVarsFromClosure( closure )
will print:
['a':1, 'b':2, 'c':4, 'fish':42]
But
def fish = 42
println extractVarsFromClosure( closure )
will not print the value of fish, since def makes fish a local variable rather than a binding variable.