I have a field in my Postgres DB called authorizedUserNumber. This field defaults to 0 and does not auto-increment, because it is only assigned once a user has been fully onboarded.
Now suppose a new user has been fully onboarded and I want to assign a unique number to authorizedUserNumber. Since I may have multiple servers running, I want to detect collisions of unique numbers in this field, to protect against race conditions.
I thought of defining authorizedUserNumber as a Sequelize unique field, and trying something like this:
// get the current max authorizedUserNumber
let userWithMaxAuthorizedUserNumber = await connectors.usersClinical.findAll({
  attributes: [
    // alias the aggregate so it can be read off the result row
    [sequelize.fn('MAX', sequelize.col('authorizedUserNumber')), 'maxAuthorizedUserNumber']
  ],
  raw: true,
});
let newAuthorizedUserNumber = userWithMaxAuthorizedUserNumber[0].maxAuthorizedUserNumber + 1;

let numAttempts = 0;
let done = false;
while (!done && numAttempts <= 50) {
  try {
    user = await updateUser(user.userId, { authorizedUserNumber: newAuthorizedUserNumber });
    done = true;
  } catch (e) {
    // a unique field will throw an error if you try to store a duplicate value in it
    console.log(`Collision in assigning unique authorizedUserNumber. UserId: ${user.userId}`);
    newAuthorizedUserNumber += 1;
    numAttempts += 1;
  }
}
if (!done) {
  console.error(`Could not assign unique authorizedUserNumber. UserId: ${user.userId}`);
}
The problem with this code is that if authorizedUserNumber is defined as unique, I can't give it a default value, so there's no way for it to be empty before the correct value is assigned.
What's the best practice for dealing with this sort of situation?
UPDATE:
Thanks to @Belayer for providing the solution.
Here are some notes on how I implemented it in Sequelize/Postgres.
Sequelize, AFAICT, does not yet support sequences, so I used a raw query in Sequelize to create the sequence:
let sql = `
  CREATE SEQUENCE authorizedUserNumber_seq
  START WITH 1
  INCREMENT BY 1;`;
let result;
try {
  result = await db.query(sql);
  console.log(`sql code to create authorizedUserNumber_seq has been run successfully.`);
} catch (e) {
  result = null;
  console.error(`Error in creating authorizedUserNumber_seq.`);
}
Then, when it's time to authorize the new user and assign a unique user number, I again use a raw query, with a bind parameter instead of string interpolation to avoid SQL injection:
let sql = `UPDATE "usersClinical"
  SET "authorizedUserNumber" = nextval('authorizedUserNumber_seq')
  WHERE "userId" = :userId;`;
await db.query(sql, { replacements: { userId: user.userId } });
Rather than defaulting to 0, just let the column be null when not set. Since nulls are never considered equal to each other, there can be any number of them without violating a unique constraint. Then create a sequence for that column (but do not set it as the column default). There is no requirement for a sequence to auto-increment; nextval can be called whenever needed. Make the assignment from the sequence when the new user becomes fully onboarded.
create table users
  ( id         integer generated always as identity
  , name       text
  , assignedid integer
  , constraint assigned_uk unique (assignedid)
  );

create sequence user_assigned_seq;
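A minimal sketch of the assignment step against the demo schema above (the id value 42 is just a placeholder):
update users
   set assignedid = nextval('user_assigned_seq')
 where id = 42
   and assignedid is null;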
You can even make the assignment when the user is created, if desired.
Instead of creating a unique constraint, you can create a unique index like this:
CREATE UNIQUE INDEX ON tab (nullif(authorizedUserNumber, 0));
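Because nullif(authorizedUserNumber, 0) maps the 0 placeholder to null, and nulls never collide in a unique index, any number of rows can sit at 0 while real assigned numbers stay unique. A quick sketch, assuming tab is a table with that column:
INSERT INTO tab (authorizedUserNumber) VALUES (0);
INSERT INTO tab (authorizedUserNumber) VALUES (0); -- fine: 0 is indexed as null
INSERT INTO tab (authorizedUserNumber) VALUES (7);
INSERT INTO tab (authorizedUserNumber) VALUES (7); -- fails: duplicate key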
Currently, I have a GORM query that calculates a counter for each entry of a table using a second table, and returns the first table with a new field, "locks_total", which doesn't exist in the original.
Now, what I want to achieve is the same table returned (with the new "locks_total" field) but filtered on "locks_total" = 0.
I can't seem to make it happen, because GORM doesn't recognize this field in the table. What can I do to make it work? Is it possible to run the query and then apply the filter to the resulting table?
This is how we currently do it:
txn := dao.postgresManager.DB().Model(&models.SecretMetadataResponse{})
txn.Table(dao.firstTableName + " as s").
    Select("s.*, (SELECT COUNT(*) FROM " + dao.secondTableName + " as l WHERE s.id = l.secret_id) locks_total")

var secretsTotal int64
var metadataResponseEntries []models.SecretMetadataResponse
txn = txn.Count(&secretsTotal). // saving the count before trimming according to the pagination parameters
    Limit(params.Limit).
    Offset(params.Offset).
    Order(constants.FieldName).
    Order(constants.FieldId).
    Find(&metadataResponseEntries)
Here, SecretMetadataResponse embeds SecretMetadata (the same fields as in the table), and LocksTotal is the new calculated field we want:
type SecretMetadataResponse struct {
    SecretMetadata
    LocksTotal int `json:"locks_total"`
}
Thanks in advance :)
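One way to run the query and then filter on the computed column is to wrap the whole select as a derived table, so that locks_total becomes a real column the outer WHERE can see. A hedged sketch, assuming GORM v2's subquery support (names are copied from the question's code):
// Build the inner query that computes locks_total.
sub := dao.postgresManager.DB().
    Table(dao.firstTableName + " as s").
    Select("s.*, (SELECT COUNT(*) FROM " + dao.secondTableName + " as l WHERE s.id = l.secret_id) locks_total")

// Wrap it as a derived table and filter on the computed column.
var entries []models.SecretMetadataResponse
err := dao.postgresManager.DB().
    Table("(?) as sub", sub).
    Where("sub.locks_total = ?", 0).
    Find(&entries).Error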
We have a storage table to which we want to add a new integer column (it is in fact an enum of 3 values converted to int). We want a row to be returned when:
It is an older row and the column does not exist
It is a new row and the column exists and does not match a particular value
When I just use a not-equal operator on the column, the old rows do not get returned. How can this be handled?
Update
Assuming a comparison always returns false for the non-existent column, I tried something like the code below (the value of the property will always be > 0 when it exists), which does not work either:
If the (Prop GreaterThanOrEqual -1) condition returns false, I assume the value is null.
If not, the actual comparison happens.
string propNullCondition = TableQuery.GenerateFilterConditionForInt(
    "Prop",
    QueryComparisons.GreaterThanOrEqual,
    -1);
propNullCondition = $"{TableOperators.Not}({propNullCondition})";

string propNotEqualValueCondition = TableQuery.CombineFilters(
    propNullCondition,
    TableOperators.Or,
    TableQuery.GenerateFilterConditionForInt(
        "Prop",
        QueryComparisons.NotEqual,
        XXXX));
Note: the table rows written so far do not have "Prop"; only new rows will have this column. The expectation is that the query should return all old rows, and new rows only when Prop != XXXX.
It seems that your code is correct; maybe there is a minor error somewhere. You can follow my code below, which works fine as per my test:
Note: in the filter, the column name is case-sensitive.
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference("test1");

string propNullCondition = TableQuery.GenerateFilterConditionForInt(
    "prop1", // note that the column name is case-sensitive here
    QueryComparisons.GreaterThanOrEqual,
    -1);
propNullCondition = $"{TableOperators.Not}({propNullCondition})";

TableQuery<DynamicTableEntity> propNotEqualValueCondition = new TableQuery<DynamicTableEntity>()
    .Where(
        TableQuery.CombineFilters(
            propNullCondition,
            TableOperators.Or,
            TableQuery.GenerateFilterConditionForInt(
                "prop1", // note that the column name is case-sensitive here
                QueryComparisons.NotEqual,
                2)));

var query = table.ExecuteQuery(propNotEqualValueCondition);
foreach (var q in query)
{
    Console.WriteLine(q.PartitionKey);
}
I have a query that returns multiple rows:
select id,status from store where last_entry = <given_date>;
The returned rows look like:
id       status
-----------------
1131A    correct
1132B    incorrect
1134G    empty
I want to store the results like this:
$rows = [
    {
        ID1     => '1131A',
        status1 => 'correct'
    },
    {
        ID2     => '1132B',
        status2 => 'incorrect'
    },
    {
        ID3     => '1134G',
        status3 => 'empty'
    }
];
How can I do this?
What you are looking for is a hash of hashes in Perl. What you do is:
1) Iterate over the results of your query.
2) Split each entry by tab.
3) Create a hash with the id as key and the status as value.
Now, to store the hash created by each such query, you create another hash. Here the key could be something like the given_date in your case, so you could write:
$parent_hash{$given_date} = \%child_hash;
This results in the parent hash holding a reference to each query result.
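A minimal sketch of that approach (the sub name and its inputs are illustrative; rows are assumed to arrive as tab-separated "id<TAB>status" lines):
use strict;
use warnings;

sub build_parent_hash {
    my ($given_date, @result_lines) = @_;
    my %child_hash;
    for my $line (@result_lines) {
        my ($id, $status) = split /\t/, $line;   # split each entry by tab
        $child_hash{$id} = $status;              # id as key, status as value
    }
    my %parent_hash;
    $parent_hash{$given_date} = \%child_hash;    # one result set per query date
    return \%parent_hash;
}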
For more you can refer to these resources:
http://perldoc.perl.org/perlref.html
http://www.thegeekstuff.com/2010/06/perl-array-reference-examples/
Have a look at the DBI documentation. Here is part of a script that does what you want:
my $rows = [];
while (my $hash_ref = $sth->fetchrow_hashref) {
    push @$rows, $hash_ref;
}
You can do this by passing a Slice option to DBI's selectall_arrayref:
my $results = $dbh->selectall_arrayref(
    'select id,status from store where last_entry = ?',
    { Slice => {} },
    $last_entry,
);
This will return an array reference with each row stored in a hash. Note that since hash keys must be unique, you will run into problems if you have duplicate column names in your query.
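For example, to iterate over the result (a small sketch, using the column names from the query):
for my $row (@$results) {
    print "$row->{id}: $row->{status}\n";
}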
This is the kind of question that raises an immediate red flag. It's somewhat of an odd request to want a collection (array/array reference) of heterogeneous data structures; homogeneity is the whole point of a collection. If you tell us what you intend to do with the data, rather than what you want the data to look like, we can probably suggest a better solution.
You want something like this:
# select the data as an array of hashes - returned as an arrayref
my $rows = $dbh->selectall_arrayref($the_query, { Slice => {} }, @any_search_params);

# now make the id keys unique
my $i = 1;
foreach my $row (@$rows) {
    # remove each column and assign the value to a uniquely named column
    # by adding a numeric suffix
    $row->{"ID" . $i}     = delete $row->{ID};
    $row->{"status" . $i} = delete $row->{status};
    $i += 1;
}
Add your own error checking.
You said "save as a hash", but your example is an array of hashes; a hash of hashes would need a slightly different method.
I am new to SharePoint. I have a custom field type derived from SPFieldChoice, and my field allows users to select multiple values. I need to replace some old custom columns with the new column and copy the data from each old column into the new one. The old column also allows users to select multiple values by ticking checkboxes. I have the following code to copy the data to the new field:
foreach (SPListItem item in list.Items)
{
    if (item[oldField.Title] == null)
    {
        item[newFld.Title] = string.Empty;
        item.Update();
    }
    else
    {
        // the stored value is ";#"-delimited; keep every second entry (the display value)
        string[] itemvalues = item[oldField.Title].ToString().Split(new string[] { ";#" }, StringSplitOptions.None);
        StringBuilder multiLookupValues = new StringBuilder();
        multiLookupValues.Append(";#");
        for (int cnt = 0; cnt < itemvalues.Length / 2; cnt++)
        {
            multiLookupValues.Append(itemvalues[(cnt * 2) + 1] + ";#");
        }
        item[newFld.Title] = multiLookupValues.ToString();
        item.SystemUpdate(false);
    }
}
This code works fine while the resulting StringBuilder is shorter than 255 characters, but when the length exceeds 255 I get the following exception:
Invalid choice Value. A choice field contains invalid data. Please check the value and try again.
Is there any other way of copying data to an SPFieldChoice field? How can I resolve this problem? Please help me.
Do the update in multiple passes so that the string doesn't exceed the limit (i.e. append with value +=). However, if the problem is that the value can't be longer than 255 characters, you have to reconsider how you are doing the choices. If the length is exceeded and updating the value in multiple passes doesn't work (a Site Column will have the same limitation), you can do the next best thing:
1) Create a new list that will hold the choices.
2) Change the destination field to be a lookup.
3) Update accordingly for each item (picking up the ID from the lookup list).
There's no length limit to this approach.
David Sterling
david_sterling@sterling-consulting.com
www.sterling-consulting.com
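For reference, a rough sketch of step 3 in the server object model (the "AuthorizedChoices" list name is illustrative; SPFieldLookupValueCollection handles the multi-value lookup case):
SPList choicesList = web.Lists["AuthorizedChoices"]; // the list created in step 1
foreach (SPListItem item in list.Items)
{
    object oldValue = item[oldField.Title];
    if (oldValue == null) continue;

    var lookupValues = new SPFieldLookupValueCollection();
    string[] texts = oldValue.ToString().Split(new string[] { ";#" }, StringSplitOptions.RemoveEmptyEntries);
    foreach (string text in texts)
    {
        // find the choices-list row whose title matches the old choice text
        foreach (SPListItem choice in choicesList.Items)
        {
            if (choice.Title == text)
            {
                lookupValues.Add(new SPFieldLookupValue(choice.ID, choice.Title));
                break;
            }
        }
    }
    item[newFld.Title] = lookupValues;
    item.SystemUpdate(false);
}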
I have a URL pointing to content, and I need to get the highest value contained in one of the columns. Is there an aggregate function that will accomplish that, or do I have to do it manually?
If you're querying an Android content provider, you should be able to achieve this by passing MAX(COLUMN_NAME) into the selection parameter of ContentResolver.query:
getContentResolver().query(uri, projection, "MAX(COLUMN_NAME)", null, sortOrder);
where uri is the address of the content provider. This should return the single row with the highest value in COLUMN_NAME.
Android's database uses SQLite, so SELECT MAX(thecolumn) FROM TheTable should work, just like in any other SQLite implementation (or, for that matter, any other SQL, "ite" or not ;-). (If you're not using android.database, you'd better specify what you're using instead ;-).
That worked for me.
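For the direct-SQLite case mentioned above, a minimal sketch (assuming an open android.database.sqlite.SQLiteDatabase named db, with the table and column names from that answer):
Cursor c = db.rawQuery("SELECT MAX(thecolumn) FROM TheTable", null);
long max = 0;
if (c.moveToFirst()) {
    max = c.getLong(0); // the single result row holds the maximum
}
c.close();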
Based on the responses of @Reto Meier and @Florian von Stosch:
public static long getMaxId(Context context) {
    long maxId = 0;
    Cursor maxCursor = context.getContentResolver().query(
            ProviderContentContract.CONTENT_URI,
            new String[]{"MAX(" + Table._ID + ")"},
            null,
            null,
            null);
    if (maxCursor != null) {
        if (maxCursor.moveToFirst()) {
            maxId = maxCursor.getLong(0);
        }
        maxCursor.close(); // close even when the cursor is empty
    }
    return maxId;
}