I have to store records in an "Analyze" table that reference an object with a defined Id (nvarchar(20)).
I came up with two possible designs (Option A and Option B) for the Analyze table.
What I'm not sure about is whether it's better to store the primary keys of the different objects (ObjectA, ObjectB, ...) in separate columns, or to simply store the plain ObjectId.
The Analyze table grows very fast, and most operations are reads by a given ObjectId: in most cases you have an ObjectId and have to search the Analyze table.
The pattern of the ObjectId is always the same: you can identify a given ObjectId's type from the eighth through fifth characters from the end. For example, in CA16834K23850001ABCD:
0001 is always ObjectA
0002 is always ObjectB
ObjectA
| PK (bigInt) | ObjectId (nvarchar20) | otherfields |
| ------------| --------------------- | ------------|
| 1 | CA16834K23850001ABCD | .. |
| 2 | CA16834K23850001ABCE | .. |
ObjectB
| PK (bigInt) | ObjectId (nvarchar20) | otherfields |
| ----------- | --------------------- | ----------- |
| 1 | CA16834K23850002ABCD | .. |
| 2 | CA16834K23850002ABCE | .. |
Option A:
AnalyzeTable
| id (bigInt) | ObjA_PK (bigInt)| ObjB_PK (bigInt)| otherfields... |
| ----------- | --------------- | --------------- | -------------- |
| 1 | 1 | NULL | ... |
| 2 | NULL | 1 | ... |
Option B:
AnalyzeTable
| id (bigInt) | ObjectId (nvarchar20) | otherfields... |
| ----------- | --------------------------- | --------------- |
| 1 | CA16834K23850001ABCD | ... |
| 2 | CA16834K23850002ABCD | ... |
Which is the better design for reading the AnalyzeTable? I ask because numeric indexes might be faster than an index on an nvarchar column.
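For what it's worth, a middle ground is Option B plus a persisted computed column that extracts the type code, so lookups by ObjectId hit an index and you can still filter by object type. A hedged T-SQL sketch; the table, column, and index names are illustrative:

```sql
-- Sketch (T-SQL), assuming Option B; names are illustrative.
-- The type code occupies characters 13-16 of the 20-character ObjectId.
CREATE TABLE AnalyzeTable (
    id         bigint IDENTITY PRIMARY KEY,
    ObjectId   nvarchar(20) NOT NULL,
    ObjectType AS SUBSTRING(ObjectId, 13, 4) PERSISTED
    -- otherfields ...
);

-- Seeks by ObjectId are index seeks either way; an nvarchar(20) key is
-- wider than a bigint but still cheap to compare.
CREATE INDEX IX_AnalyzeTable_ObjectId ON AnalyzeTable (ObjectId);
```

With this shape the common read path (`WHERE ObjectId = @id`) stays a single index seek, and no per-object-type columns need to be maintained.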
I have a couple of views in PostgreSQL that return tables like these:
tickets_new
| id | queue | owner | subject | status | created |
| -- | ----- | ----- | -------- | ------ | ------------------- |
| 1 | 1 | 123 | Subject1 | new | 2022-08-22 16:57:26 |
| 2 | 1 | 345 | Subject2 | new | 2022-08-22 13:24:09 |
tickets_handled
| id | queue | owner | subject | priority | status | created |
| -- | ----- | ----- | -------- | -------- | ------ | ------------------- |
| 3 | 4 | 234 | Subject3 | 0 | open | 2022-08-09 16:57:26 |
| 6 | 4 | 45 | Subject6 | 0 | open | 2022-08-13 13:24:09 |
tickets_planworks
| id | subject | status | starts | due |
| -- | ---------- | ------ | ------------------- | ------------------- |
| 12 | Planworks1 | open | 2022-08-23 21:01:00 | 2022-08-23 23:00:00 |
| 20 | Planworks2 | open | 2022-08-23 21:01:00 | 2022-08-23 23:00:00 |
Then there's a table objectcustomfieldvalues with this structure:
objectcustomfieldvalues
| id | customfield | objectid | content |
| -- | ----------- | -------- | -------------------- |
| 1 | 1 | 3 | Ticket3_Client |
| 2 | 5 | 2 | Ticket2_Interaction |
| 3 | 13 | 6 | Ticket6_Detalisation |
The objectid column links to id in the ticket views.
For example, I try to join objectcustomfieldvalues with the tickets_handled view with this query:
SELECT tt.id, tt.queue, tt.owner, tt.subject, tt.status,
tt.created,
cf_client.content AS client,
cf_interaction.content AS interaction,
cf_detalisation.content AS detalisation
FROM tickets_handled tt
LEFT JOIN
(SELECT objectid, content
FROM objectcustomfieldvalues
WHERE objectid IN (SELECT id FROM tickets_handled)
AND customfield = '1') cf_client
ON cf_client.objectid=tt.id
LEFT JOIN
(SELECT objectid, content
FROM objectcustomfieldvalues
WHERE objectid IN (SELECT id FROM tickets_handled)
AND customfield = '5') cf_interaction
ON cf_interaction.objectid=tt.id
LEFT JOIN
(SELECT objectid, content
FROM objectcustomfieldvalues
WHERE objectid IN (SELECT id FROM tickets_handled)
AND customfield = '13') cf_detalisation
ON cf_detalisation.objectid=tt.id
And it results in this table:
tickets_handled
| id | queue | owner | subject | priority | status | created | client | interaction | detalisation |
| -- | ----- | ----- | -------- | -------- | ------ | ------------------- | -------------- | ----------- | -------------------- |
| 3 | 4 | 234 | Subject3 | 0 | open | 2022-08-09 16:57:26 | Ticket3_Client | | |
| 6 | 4 | 45 | Subject6 | 0 | open | 2022-08-13 13:24:09 | | | Ticket6_Detalisation |
I want a procedure (or something similar) to which I can pass a view name (e.g. tickets_handled) and which returns the fields from that view plus the fields from objectcustomfieldvalues linked to the tickets in that view.
Right now, whenever I write the join query, I have to list all the fields for each of my views, repeat the view name in the subselects, and write a separate join for every custom field.
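A hedged sketch of such a function in PL/pgSQL, using dynamic SQL. The function name is made up; with RETURNS SETOF record, the caller must supply the column list, since the row type depends on the view:

```sql
-- Sketch only: a function that takes a view name and joins the three
-- custom fields onto it. Function name and column list are assumptions.
CREATE OR REPLACE FUNCTION tickets_with_customfields(view_name regclass)
RETURNS SETOF record
LANGUAGE plpgsql AS $$
BEGIN
  RETURN QUERY EXECUTE format(
    'SELECT tt.*,
            cf1.content  AS client,
            cf5.content  AS interaction,
            cf13.content AS detalisation
     FROM %s tt
     LEFT JOIN objectcustomfieldvalues cf1
            ON cf1.objectid = tt.id AND cf1.customfield = 1
     LEFT JOIN objectcustomfieldvalues cf5
            ON cf5.objectid = tt.id AND cf5.customfield = 5
     LEFT JOIN objectcustomfieldvalues cf13
            ON cf13.objectid = tt.id AND cf13.customfield = 13',
    view_name);
END $$;

-- Usage: the AS clause must match the view's columns plus the three added ones.
SELECT *
FROM tickets_with_customfields('tickets_handled')
  AS t(id int, queue int, owner int, subject text, priority int,
       status text, created timestamp,
       client text, interaction text, detalisation text);
```

Note that the `WHERE objectid IN (...)` subselects from the original query are dropped here; the join condition `cf.objectid = tt.id` already restricts the rows to the ids in the view.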
In PostgreSQL, I have a table that looks like this:
| id | json |
| -- | ------------------------------- |
| 1 | {"id":1,"customer":"BANK"} |
| 1 | {"id":1,"customer":"BANK"} |
| 2 | {"id":2,"customer":"GOVT"} |
| 3 | {"id":3,"customer":"BANK"} |
| 4 | {"id":4,"customer":"ASSET MGR"} |
| 4 | {"id":4,"customer":"ASSET MGR"} |
I need to count the occurrences of each customer over unique ids, like so:
| customer | count |
| ----------- | ----- |
| "BANK" | 2 |
| "GOVT" | 1 |
| "ASSET MGR" | 1 |
Is there a good way to achieve this using PostgreSQL and json? I can currently extract the customers and ids, but I'm having difficulty counting the unique json objects.
select count(distinct id) as count, "json" ->> 'customer' as customer
from data
group by customer;
| count | customer |
| ----- | --------- |
| 1 | ASSET MGR |
| 2 | BANK |
| 1 | GOVT |
I have the following table:
+----+---------+-------+
| id | Key | Value |
+----+---------+-------+
| 1 | name | Bob |
| 1 | surname | Test |
| 1 | car | Tesla |
| 2 | name | Mark |
| 2 | cat | Bobby |
+----+---------+-------+
Key can hold basically anything. I would like to arrive at the following output:
+----+------+---------+-------+-------+
| id | name | surname | car | cat |
+----+------+---------+-------+-------+
| 1 | Bob | Test | Tesla | |
| 2 | Mark | | | Bobby |
+----+------+---------+-------+-------+
Then I would like to merge the output with another table (based on the id).
Is this possible if I don't know in advance what the Key column holds? The values there are dynamic.
Could you point me to the right direction?
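One common approach is conditional aggregation, sketched below; the table name kv and the key list are illustrative. This works when the keys are known. For truly dynamic keys you would generate the statement from `SELECT DISTINCT "Key"`, or use PostgreSQL's crosstab() from the tablefunc extension:

```sql
-- Sketch: pivot known keys into columns via conditional aggregation.
-- Table name kv and the hard-coded key list are assumptions.
SELECT kv.id,
       MAX(CASE WHEN kv."Key" = 'name'    THEN kv."Value" END) AS name,
       MAX(CASE WHEN kv."Key" = 'surname' THEN kv."Value" END) AS surname,
       MAX(CASE WHEN kv."Key" = 'car'     THEN kv."Value" END) AS car,
       MAX(CASE WHEN kv."Key" = 'cat'     THEN kv."Value" END) AS cat
FROM kv
GROUP BY kv.id;
```

Merging the result with another table afterwards is then an ordinary join on id, e.g. wrapping the query above in a subselect and joining it to the other table.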
Using standard SQL in BigQuery: given a table such as the one below, where the values have already been counted so each appears only once:
| id | key | value |
| -- | ----- | ----- |
| 1 | read | aa |
| 1 | read | bb |
| 1 | name | abc |
| 2 | read | bb |
| 2 | read | cc |
| 2 | name | def |
| 2 | value | some |
| 3 | read | aa |
How can I make it so that each row is one user with their respective values (e.g. via NEST)? The table would then look like:
| id | key | value |
| -- | ----- | ----- |
| 1 | read | aa |
|  | read | bb |
|  | name | abc |
| 2 | read | bb |
|  | read | cc |
|  | name | def |
|  | value | some |
| 3 | read | aa |
I've tried using ARRAY_AGG on the column, which ends up listing all the values of that column.
I just need to have each row as a single user with multiple values, as shown above.
This is what I want it to look like: the nested row display BigQuery shows in its UI.
Below is for BigQuery Standard SQL
#standardSQL
SELECT id, ARRAY_AGG(STRUCT(key AS key, value AS value)) params
FROM `project.dataset.table`
GROUP BY id
Applied to your sample data, this groups each id into a single row with an array of (key, value) structs.
Here is what I want to do:
I have this table
+----+-------------+
| id | data |
+----+-------------+
| 1 | max |
| 2 | linda |
| 3 | sam |
| 4 | henry |
+----+-------------+
and I want to update data by concatenating the id column with data, so it will look like this:
+----+-------------+
| id | data |
+----+-------------+
| 1 | max1 |
| 2 | linda2 |
| 3 | sam3 |
| 4 | henry4 |
+----+-------------+
Sounds like this is basically what you want (T-SQL; other platforms may have different methods for type conversion and concatenation):
update myTable
set data = data + convert(varchar(50), id);
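On platforms that support CONCAT (SQL Server 2012+, MySQL, PostgreSQL), the explicit conversion can be dropped, since CONCAT converts its arguments to strings implicitly:

```sql
-- Same update without an explicit convert; CONCAT handles the cast.
update myTable
set data = CONCAT(data, id);
```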