When I use the "show databases" command in the taos shell, I see there are a lot of database parameters, like keep, days, cache, and blocks:
taos> show databases;
name | created_time | ntables | vgroups | replica | quorum | days | keep0,keep1,keep(D) | cache(MB) | blocks | minrows | maxrows | wallevel | fsync | comp | cachelast | precision | update | status |
====================================================================================================================================================================================================================================================================================
test | 2021-05-26 17:33:17.338 | 1 | 1 | 1 | 1 | 10 | 3650,3650,3650 | 16 | 6 | 100 | 4096 | 1 | 3000 | 2 | 0 | ms | 0 | ready |
Query OK, 1 row(s) in set (0.001774s)
To follow best practices with TDengine, how should I adjust these database parameters?
You can try "ALTER DATABASE db_name KEEP value;".
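For example (a sketch, assuming TDengine 2.x syntax; parameters such as days and cache generally have to be chosen when the database is created, while keep and blocks can usually be changed on an existing database):

-- retention and buffer parameters set at creation time
-- (days and cache generally cannot be changed afterwards)
CREATE DATABASE IF NOT EXISTS test KEEP 3650 DAYS 10 CACHE 16 BLOCKS 6;
-- parameters that can usually be adjusted on an existing database
ALTER DATABASE test KEEP 365;
ALTER DATABASE test BLOCKS 12;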
Related
I am using Teradata SQL Assistant Version TD 16.10.06.01 ...
I have seen a lot of people transpose data for smallish tables, but I am working with thousands of clients and need to break the columns up into Line Item Values to compare orders and highlight differences between them. The problem is that it is all horizontally linked, and I need to transpose it into Id, Transaction id, Version, Line Item Value 1, Line Item Value 2..., plus another column comparing values to see if they changed.
example:
+----+------------+-----------+------------+----------------+--------+----------+----------+------+-------------+
| Id | First Name | Last Name | DOB | transaction id | Make | Location | Postcode | Year | Price |
+----+------------+-----------+------------+----------------+--------+----------+----------+------+-------------+
| 1 | John | Smith | 15/11/2001 | 1654654 | Audi | NSW | 2222 | 2019 | $ 10,000.00 |
| 2 | Mark | White | 11/02/2002 | 1661200 | BMW | WA | 8888 | 2016 | $ 8,999.00 |
| 3 | Bob | Grey | 10/05/2002 | 1667746 | Ford | QLD | 9999 | 2013 | $ 3,000.00 |
| 4 | Phil | Faux | 6/08/2002 | 1674292 | Holden | SA | 1111 | 2000 | $ 5,800.00 |
+----+------------+-----------+------------+----------------+--------+----------+----------+------+-------------+
I am hoping to change the data to:
+----+----------+----------+----------+----------------+----------+----------+----------------+---------+-----+
| id | trans_id | Vers_ord | Item Val | Ln_Itm_Dscrptn | Org_Val | Updt_Val | Amndd_Ord_chck | Lbl_Rnk | ... |
+----+----------+----------+----------+----------------+----------+----------+----------------+---------+-----+
| 1 | 1654654 | 2 | 11169 | Make | Audi BLK | Audi WHT | Yes | 1 | |
| 1 | 1654654 | 2 | 11189 | Location | NSW | WA | Yes | 2 | |
| 1 | 1654654 | 2 | 23689 | Postcode | 2222 | 6000 | Yes | 3 | |
+----+----------+----------+----------+----------------+----------+----------+----------------+---------+-----+
Recently, with smaller data, I created a table, added in the Values, and then used a case statement (when value 1 then xyz) with a product join, and the data warehouse admins didn't mention anything out of order. But that was only a 16-row by 200-column table to transpose (Sum, Avg, Count, Median across 4 subsets of clients), which was significantly smaller than the tables I now need to compare.
I am worried my prior method will slow the data warehouse down, and it will also take me a significant amount of time to type out the SQL.
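For context, that earlier approach looked roughly like this (a sketch only; table and column names are illustrative, and everything is cast to VARCHAR so the line item values share a type):

-- cross join each order row to a small list of line item labels,
-- then pick the matching column value with a CASE expression
SELECT o.Id,
       o.transaction_id,
       v.Ln_Itm_Dscrptn,
       CASE v.Ln_Itm_Dscrptn
            WHEN 'Make'     THEN o.Make
            WHEN 'Location' THEN o.Location
            WHEN 'Postcode' THEN CAST(o.Postcode AS VARCHAR(10))
       END AS Org_Val
FROM orders o
CROSS JOIN (
    SELECT CAST('Make' AS VARCHAR(20)) AS Ln_Itm_Dscrptn
    UNION ALL SELECT 'Location'
    UNION ALL SELECT 'Postcode'
) v;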
Is there a better way to transpose large tables?
I changed the context a bit, but it's basically the same issue.
Imagine we are in a never-ending tunnel, shaped like a circle. We split the circle into sections numbered 1 to 10, and we'll call each section a slot (sl). There are 2 groups (gr) of living things walking in the tunnel. Each group has 2 bands, each with a name and global hitpoints (hp). Every group walks forward (although the bands might change order). If a group is at slot #10 and moves forward, it will be at slot #1. We snapshot their information every day. All the data gathered is stored in a table with this structure:
+--------+--------------+----------------+--------------+--------------+----------------+--------------+--------------+----------------+--------------+--------------+----------------+--------------+
| day_id | gr_1_sl_1_id | gr_1_sl_1_name | gr_1_sl_1_hp | gr_1_sl_2_id | gr_1_sl_2_name | gr_1_sl_2_hp | gr_2_sl_1_id | gr_2_sl_1_name | gr_2_sl_1_hp | gr_2_sl_2_id | gr_2_sl_2_name | gr_2_sl_2_hp |
+--------+--------------+----------------+--------------+--------------+----------------+--------------+--------------+----------------+--------------+--------------+----------------+--------------+
| 1      | 3            | orc            | 100          | 4            | goblin         | 10           | 10           | human          | 50           | 1            | dwarf          | 25           |
| 2      | 6            | goblin         | 7            | 7            | orc            | 76           | 2            | human          | 60           | 3            | dwarf          | 28           |
+--------+--------------+----------------+--------------+--------------+----------------+--------------+--------------+----------------+--------------+--------------+----------------+--------------+
As you can see, the group and slot numbers are encoded sequentially in the column names, while the cells hold the actual values. What I want is to have the information shaped this way instead:
+---------+-------+-------+-----------+---------+
| id_game | gr_id | sl_id | band_name | band_hp |
+---------+-------+-------+-----------+---------+
| 1 | 1 | 3 | orc | 100 |
| 1 | 1 | 4 | goblin | 10 |
| 1 | 2 | 10 | human | 50 |
| 1 | 2 | 1 | dwarf | 25 |
| 2 | 1 | 6 | goblin | 7 |
| 2 | 1 | 7 | orc | 76 |
| 2 | 2 | 2 | human | 60 |
| 2 | 2 | 3 | dwarf | 28 |
+---------+-------+-------+-----------+---------+
I have this information in Power BI, although I can create views in SQL Server if need be. I have tried many things; the closest I got was unpivoting and parsing the original columns to get day_id, gr_id, sl_id, attributes and values. In attributes and values, it's basically name and hp with their corresponding value (I changed hp into a string), but then I'm stuck; I'm not sure what to do next.
Does anyone have any ideas? Keep in mind that I oversimplified the problem; there are more groups, more slots, more bands and more statistics (e.g. attack and defense ratings).
You seem to want to unpivot the table. In SQL Server, I recommend using apply:
select t.day_id as id_game, v.*
from t cross apply
     (values (1, gr_1_sl_1_id, gr_1_sl_1_name, gr_1_sl_1_hp),
             (1, gr_1_sl_2_id, gr_1_sl_2_name, gr_1_sl_2_hp),
             (2, gr_2_sl_1_id, gr_2_sl_1_name, gr_2_sl_1_hp),
             (2, gr_2_sl_2_id, gr_2_sl_2_name, gr_2_sl_2_hp)
     ) v(gr_id, sl_id, band_name, band_hp);
In other databases, you can do something similar with union all.
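For reference, a minimal sketch of the union all equivalent (same table and column names as above):

select day_id as id_game, 1 as gr_id, gr_1_sl_1_id as sl_id, gr_1_sl_1_name as band_name, gr_1_sl_1_hp as band_hp from t
union all
select day_id, 1, gr_1_sl_2_id, gr_1_sl_2_name, gr_1_sl_2_hp from t
union all
select day_id, 2, gr_2_sl_1_id, gr_2_sl_1_name, gr_2_sl_1_hp from t
union all
select day_id, 2, gr_2_sl_2_id, gr_2_sl_2_name, gr_2_sl_2_hp from t;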
I'm trying to implement Row Level Security in SQL Server 2016.
The problem is, multiple users can have read permissions on a given row, and when I write a complex condition in the predicate, performance gets very bad.
I tried keeping all the usernames in one column of the table and having the predicate search through them for SYSTEM_USER with a LIKE '%...%' pattern, but performance is low.
Example of the values in the Usernames column in my controlled table for one row:
domain\john.wick;domain\red.eagle;domain\spartak.something....
Here is my function:
CREATE FUNCTION fn_securitypredicate(@Usernames AS nvarchar(4000))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS Result
    WHERE @Usernames LIKE '%' + SYSTEM_USER + '%';
With this, execution time went from 2 seconds to 50 seconds. Any suggestions for improvement? Here is the security policy:
CREATE SECURITY POLICY [Policy]
ADD FILTER PREDICATE [dbo].[fn_securitypredicate]([Usernames])
ON [dbo].[Products];
This is the solution I came up with for my previous team.
This requires a users table and a user_permissions table, as well as a permission column on your controlled table. To scale with users, it should also have a user group table and a user group permissions table.
users user_permissions controlled_table
+-----------+---------+ +---------+---------------+ +---------------+------+------+
| user_name | user_id | | user_id | permission_id | | permission_id | pk_1 | pk_2 |
+-----------+---------+ +---------+---------------+ +---------------+------+------+
| admin | 1 | | 1 | 0 | | 2 | 1 | 1 |
| user1 | 2 | | 2 | 1 | | 2 | 1 | 2 |
| user2 | 3 | | 2 | 2 | | 3 | 1 | 3 |
| user3 | 4 | | 2 | 3 | | 4 | 2 | 1 |
| | | | 2 | 4 | | 3 | 2 | 2 |
| | | | 3 | 1 | | 1 | 2 | 3 |
| | | | 3 | 2 | | 1 | 3 | 1 |
| | | | 4 | 2 | | 5 | 3 | 2 |
| | | | 4 | 3 | | 4 | 3 | 3 |
| | | | 4 | 4 | | 2 | 4 | 1 |
| | | | | | | 3 | 4 | 2 |
| | | | | | | 3 | 4 | 3 |
+-----------+---------+ +---------+---------------+ +---------------+------+------+
For performance, you will want to add the permission_id to whatever index you were using to search the controlled table. This will allow you to join permissions on the index while searching on the remaining columns. You should view the execution plan for specific details on your indexes.
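A rough sketch of the predicate under this model (names are illustrative; it assumes SYSTEM_USER matches users.user_name and that the controlled table, here dbo.Products, has gained a permission_id column):

-- join-based predicate instead of LIKE over a delimited username list
CREATE FUNCTION dbo.fn_permissionpredicate(@permission_id AS int)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS Result
    FROM dbo.users u
    INNER JOIN dbo.user_permissions up ON up.user_id = u.user_id
    WHERE u.user_name = SYSTEM_USER
      AND up.permission_id = @permission_id;
GO

CREATE SECURITY POLICY [PermissionPolicy]
ADD FILTER PREDICATE dbo.fn_permissionpredicate([permission_id])
ON [dbo].[Products];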
I want to store some number sequences in my database. So I have a table like this:
+-----+---------+-----+
| idx | seq_id | x |
+-----+---------+-----+
| 1 | 1 | 1 |
| 2 | 1 | 1 |
| 3 | 1 | 2 |
| 4 | 1 | 3 |
| 5 | 1 | 5 |
| 6 | 1 | 7 |
| 1 | 2 | 1 |
| 2 | 2 | 2 |
| 3 | 2 | 4 |
| 4 | 2 | 8 |
| 5 | 2 | 16 |
| ... |
+-----+---------+-----+
but when I look at it, it feels like I'm storing more overhead with idx and seq_id than meaningful information.
In some sense I am, but I wouldn't find it strange if the database engine optimized away most of the repetition here. Is this the case for SQLite, MySQL, Postgres...?
And what can I do, perhaps in terms of table definition, to help the database optimize this storage pattern?
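For concreteness, a possible definition of the table sketched above (names are illustrative); a composite primary key on (seq_id, idx) avoids a separate surrogate key and keeps each sequence's rows together:

-- works as-is in SQLite, MySQL and Postgres
CREATE TABLE sequence_values (
    seq_id INTEGER NOT NULL,  -- which sequence the value belongs to
    idx    INTEGER NOT NULL,  -- position within the sequence
    x      INTEGER NOT NULL,  -- the value itself
    PRIMARY KEY (seq_id, idx)
);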
I'm having troubles creating a SQL Server trigger to do what I want. I don't have much experience with triggers.
Basically I have a table, let's call it cluster_metadata, with metadata that describes certain attributes about an object. Then I have a second table, let's call it activities_table, with user entered data that may pertain to certain objects in the cluster_metadata table.
The cluster_metadata table is user-updatable; however, new rows are created and deleted using a stored procedure, and users can only update specific values.
The activities_table is completely user driven, and users can insert, modify and delete rows.
I need a trigger that joins the data from both tables and keeps a merged table updated on any modification of cluster_metadata or activities_table.
For simplicity I've trimmed down the number of columns but the tables look something like this.
cluster_metadata:
+----------+----------+-------+
| Cluster  | Eligible | Group |
+----------+----------+-------+
| Cluster1 | True     | 1     |
| Cluster2 | True     | 1     |
| Cluster3 | True     | 2     |
| Cluster4 | False    | 2     |
| Cluster5 | True     | 3     |
| Cluster6 | True     | 4     |
+----------+----------+-------+
activities_table:
+--------------+------+-------+------------+
| Activity     | ID   | Group | Start Date |
+--------------+------+-------+------------+
| Patches      | 1000 | 1     | 02-01-2015 |
| Patches      | 1000 | 2     | 02-10-2015 |
| Patches      | 1000 | 3     | 02-20-2015 |
| SomeActivity | 1001 | 2     | 02-30-2015 |
+--------------+------+-------+------------+
The table that I need to create and keep updated would look something like this using the data from the above two tables:
+----------+----------+-------+--------------+------+------------+
| Cluster  | Eligible | Group | Activity     | ID   | Start Date |
+----------+----------+-------+--------------+------+------------+
| Cluster1 | True     | 1     | Patches      | 1000 | 02-01-2015 |
| Cluster2 | True     | 1     | Patches      | 1000 | 02-01-2015 |
| Cluster3 | True     | 2     | Patches      | 1000 | 02-10-2015 |
| Cluster3 | True     | 2     | SomeActivity | 1001 | 02-30-2015 |
| Cluster4 | True     | 2     | Patches      | 1000 | 02-10-2015 |
| Cluster4 | True     | 2     | SomeActivity | 1001 | 02-30-2015 |
| Cluster5 | True     | 3     | Patches      | 1000 | 02-20-2015 |
+----------+----------+-------+--------------+------+------------+
How would I create a trigger that would do this? I would just create a view, but there is some additional user input that I need to accept against this merged data.
Thanks!
Thanks for all your help. Basically, what I ended up doing is creating a joined view with the data from both cluster_metadata and activities_table. From there I scripted a stored procedure which takes the appropriate data and inserts it into a third table. The procedure also makes sure all the data is updated and matches the view on each execution. I then run the procedure each time a user inputs anything to either of the tables from the web UI. Not the best solution, but it's working.
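For reference, a minimal sketch of that setup (all object names are illustrative, and the join assumes Group links the two tables as in the sample data):

-- joined view over the two source tables
CREATE VIEW dbo.vw_cluster_activities AS
SELECT c.Cluster, c.Eligible, c.[Group], a.Activity, a.ID, a.[Start Date]
FROM dbo.cluster_metadata c
INNER JOIN dbo.activities_table a ON a.[Group] = c.[Group];
GO

-- procedure run after each web UI change to re-sync the third table
CREATE PROCEDURE dbo.usp_refresh_cluster_activities
AS
BEGIN
    -- rebuild the merged table so it matches the view on every run
    TRUNCATE TABLE dbo.cluster_activities;
    INSERT INTO dbo.cluster_activities (Cluster, Eligible, [Group], Activity, ID, [Start Date])
    SELECT Cluster, Eligible, [Group], Activity, ID, [Start Date]
    FROM dbo.vw_cluster_activities;
END;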
Thanks everyone!