I'm using Informix to create a table that acts as a flag for other apps and tables. The table I created has these columns: a unique sequential number, table_name, app, value (Y/N), and check_name (what is being verified). So, for example, if the value is 'Y' for table enrollment and app error_check, then I want to run the script associated with that app.
The idea is to be able to turn certain scripts on and off so we can easily find errors and let people know about them. I currently have to put a conditional in the actual application SQL script to match against the table I created, which seems redundant, and I don't want to add the conditional to all the code we are going to use. I was thinking maybe the script could create some sort of temp table when comparing the conditional table to the table I want to check against. Then maybe I could create an app with some basic drop-down boxes to set the Y/N values and choose what I want to check.
So I guess my question is: am I looking at this all wrong? And what kind of approach should I take to build the conditional?
Example of the conditional I'm using in SQL:
WHERE 'Y' = (SELECT value
FROM err_table
WHERE app = 'progerr' AND check_name = 'CLASS_VALIDATION');
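For reference, here is a rough sketch of the flag table that conditional assumes (Informix syntax; the column sizes and the err_id serial key are my guesses, not from the question):
-- Hypothetical shape of the flag table described above
CREATE TABLE err_table (
    err_id     SERIAL,        -- unique sequential number
    table_name VARCHAR(128),  -- table the check applies to
    app        VARCHAR(64),   -- application/script name, e.g. 'progerr'
    value      CHAR(1),       -- 'Y' to run the check, 'N' to skip it
    check_name VARCHAR(64),   -- what is being verified, e.g. 'CLASS_VALIDATION'
    PRIMARY KEY (err_id)
);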
I have a database which I created with HeidiSQL. It looks like this.
I enter Value-1 and Value-2.
Is there a way to enter a formula into the Result column, like "= Value-1 * Value-2"? I want my database to calculate the Result when I enter my values into the other cells.
A trigger is one way to achieve automated column content.
A second option is a view, which you can create in addition to the table. That view could contain SQL which generates the result:
SELECT value1, value2, value1 * value2 AS result FROM YourTable
A third (more modern) alternative is to add a virtual column to your existing table. You can do that with HeidiSQL's table editor, as shown in the screenshot. Just add a new column with the INT data type, set its Virtuality to "VIRTUAL", and set its Expression to "value-1 * value-2". That's it.
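If you prefer to do it in SQL rather than through the table editor, this is roughly the statement HeidiSQL would run for such a virtual column (assuming a MySQL/MariaDB backend; the table name YourTable is a placeholder):
-- Virtual (generated) column computed from the two input columns
ALTER TABLE YourTable
  ADD COLUMN Result INT GENERATED ALWAYS AS (`value-1` * `value-2`) VIRTUAL;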
I'm not familiar with HeidiSQL, but it appears to be a front end? What RDBMS are you using? For example, SQL Server allows a computed column.
ALTER TABLE YourTable
ADD Result AS ([Value-1] * [Value-2])
Right-click your database name in the folder structure, then go to Create new --> Trigger.
Then you can create a trigger that will be activated on that column whenever data is entered.
But you will need to know how to write the actual query and function. This requires basic knowledge that is largely generic and consistent across most SQL dialects.
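A minimal sketch of such a trigger, assuming a MySQL/MariaDB database behind HeidiSQL (the table name YourTable is a placeholder; an analogous BEFORE UPDATE trigger would be needed to keep Result current when rows are edited):
-- Recompute Result from the two input columns on every insert
CREATE TRIGGER calc_result
BEFORE INSERT ON YourTable
FOR EACH ROW
SET NEW.Result = NEW.`value-1` * NEW.`value-2`;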
I accidentally added a wrong column to my BigQuery table schema.
Instead of reloading the complete table (millions of rows), I would like to know if the following is possible:
remove bad rows (rows that contain values in the wrong column) by running a "SELECT *" query on the table with some kind of filter, and saving the result to the same table.
removing the (now) unused column.
Is this functionality (or similar) supported?
Possibly the "save result to table" functionality can have a "compact schema" option.
The simplest and most time-saving way to remove a column in BigQuery, according to the documentation:
ALTER TABLE [table_name] DROP COLUMN IF EXISTS [column_name]
If your table does not contain record/repeated type fields, your simplest option is:
Select valid columns while filtering out bad records into new temp table
SELECT < list of original columns >
FROM YourTable
WHERE < filter to remove bad entries here >
Write above to temp table - YourTable_Temp
Make a backup copy of "broken" table - YourTable_Backup
Delete YourTable
Copy YourTable_Temp to YourTable
Check if all looks as expected and if so - get rid of temp and backup tables
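A rough SQL sketch of the steps above, assuming BigQuery standard SQL DDL is available (the dataset name mydataset and the placeholders are mine, not from the question):
-- 1. Write only the valid rows/columns to a temp table
CREATE TABLE mydataset.YourTable_Temp AS
SELECT < list of original columns >
FROM mydataset.YourTable
WHERE < filter to remove bad entries here >;

-- 2. Keep a backup copy of the "broken" table
CREATE TABLE mydataset.YourTable_Backup AS
SELECT * FROM mydataset.YourTable;

-- 3. Delete the original and recreate it from the cleaned copy
DROP TABLE mydataset.YourTable;
CREATE TABLE mydataset.YourTable AS
SELECT * FROM mydataset.YourTable_Temp;

-- 4. Once everything looks as expected, get rid of the temp table
DROP TABLE mydataset.YourTable_Temp;
-- ...and the backup, once you are confident:
-- DROP TABLE mydataset.YourTable_Backup;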
Please note: the cost of step #1 above is exactly the same as the action in the first bullet of your question. The rest of the actions (copies) are free.
If you have repeated/record fields, you can still execute the above plan, but in step #1 you will need to use some BigQuery user-defined functions to get the proper schema in the output.
You can see the examples below - of course this will require some extra development - but if you are in a critical situation, this should work for you:
Create a table with Record type column
create a table with a column type RECORD
I hope at some point the Google BigQuery team will add better support for cases like yours, where you need to manipulate and output repeated/record data, but for now this is the best workaround I have found - at least for myself.
Below is the code to do it. Let's say c is the column that you want to delete.
CREATE OR REPLACE TABLE transactions.test_table AS
SELECT * EXCEPT (c) FROM transactions.test_table;
A second method, and my favorite, is to follow the steps below.
Write a SELECT query that leaves out the column(s) you want to exclude.
Go to Query Settings
In the Destination settings, choose "Set a destination table for query results", then enter the project name, dataset name, and table name exactly the same as you entered in step 1.
For "Destination table write preference", select "Overwrite table".
Save the query settings and run the query.
Save results to a table is the way to go. Try it on the big table with only the columns you are interested in, and you can apply a LIMIT to keep it small.
I am creating an update trigger. I have a situation where I need to test a condition on a table column without actually knowing the exact column name. The trigger is generic and can be applied to any table, with varying columns.
Pseudo-code:
// define a cursor that loops through all columns in "MyTable"
Define cursor C1 for (SELECT COLNAME FROM SYSCAT.COLUMNS WHERE TABNAME = 'MyTable')
FOR
// take the next column from the cursor
#temp_var = C1.COLNAME
// DELETED and INSERTED are tables that also contain the same columns as "MyTable" table.
if(DELETED.#temp_var <> INSERTED.#temp_var)
THEN
...
The statement if(DELETED.#temp_var <> ... above does of course not work, but maybe you can see what I am trying to do? I would want it to resolve at runtime to e.g. if(DELETED.MyColumn <> ..., where "MyColumn" is a column in "MyTable" and also in the INSERTED and DELETED tables. Note that because this method should be generic, I do not know beforehand what columns the table has (it depends on the specific table in use).
Any ideas on how to build the if-statement dynamically like that?
In DB2 SQL you cannot refer to columns dynamically. So, you won't be able to do that using only SQL. You could possibly call an external procedure written in another language from within the trigger. Or, you could rethink your overall design for what you are trying to do. I don't see any other options.
I was wondering if there was a way to search an entire SQLite database for one specific word. I do not know the column that it is in or even the table that it is in.
The table and row/column that contains this specific word also contains the other entries that I need to edit.
In short:
Need to find a specific word.
Can't query (I don't think I can, at least) since I don't know the table or column name that it's located in.
I need to know where this specific word is referenced, in what table and row, so I can access the entries that sit alongside it.
Basically, is there a CTRL+F-style functionality for SQLite that searches the entirety of the SQLite file?
I have Mac/Windows/Linux machines, so I am not limited by software if that is part of the solution.
Any such functionality would essentially be running queries that check every column of every table. You can do that via a script that runs the following SQL:
1) Get a list of all the tables:
select name from sqlite_master where type = 'table'
2) For each table, get all of its columns (column name is available in the name field)
pragma table_info(cows)
3) Then for each table, generate a query that checks every field and run it:
select
*
from cows
where name like '%Daisy%'
or owner like '%Daisy%'
or farm like '%Daisy%'
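If you would rather generate those queries from inside SQLite itself, here is a rough sketch (assuming SQLite 3.16+, where pragma functions can be used in the FROM clause; 'Daisy' stands in for the word you are searching for):
-- Produces one ready-to-run search query per table/column combination
SELECT 'SELECT * FROM "' || m.name || '" WHERE "' || p.name || '" LIKE ''%Daisy%'';'
FROM sqlite_master AS m
JOIN pragma_table_info(m.name) AS p
WHERE m.type = 'table';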
Hi, I have a table which was designed by a lazy developer who did not create it in 3rd normal form. He saved arrays in the table instead of using an M:M relation, and the application is running, so I cannot change the database schema.
I need to query the table like this:
SELECT * FROM myTable
WHERE usergroup = 20
where the usergroup field contains data like this: 17,19,20, or it could also be only 20, or only 19.
I could search with LIKE:
SELECT * FROM myTable
WHERE usergroup LIKE '%20%'
but in this case it would also match fields which contain 200, for example.
Does anybody have any idea?
Thanks
Fix the bad database design.
A short-term fix is to add a related table with the correct structure. Add a trigger to parse the info from the old field into the related table on insert and update. Then write a script to parse out the existing data. Now you can properly query, but you haven't broken any of the old code. Then you can search for the old code and fix it. Once you have done that, just change how data is inserted or updated in the original table so it goes to the new table, and drop the old column.
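A rough sketch of that short-term structure (the table name myTable_usergroup and the key columns id / myTable_id are hypothetical, just to illustrate the idea):
-- Properly normalized side table, populated by the trigger and the one-off script
CREATE TABLE myTable_usergroup (
    myTable_id INT NOT NULL,   -- points back to myTable's key
    usergroup  INT NOT NULL,
    PRIMARY KEY (myTable_id, usergroup)
);

-- Once it is populated, the query becomes straightforward:
SELECT t.*
FROM myTable AS t
JOIN myTable_usergroup AS ug ON ug.myTable_id = t.id
WHERE ug.usergroup = 20;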
Write a table-valued user-defined function (UDF in SQL Server, I am sure it will have a different name in other RDBMS) to parse the values of the column containing the list which is stored as a string. For each item in the comma-delimited list, your function should return a row in the table result. When you are using a query like this, query against the results returned from the UDF.
Write a function to convert a comma delimited list to a table. Should be pretty simple. Then you can use IN().
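For example, here is a sketch of that idea using SQL Server 2016+'s built-in STRING_SPLIT with CROSS APPLY (on an older version or a different RDBMS you would write the split function yourself and query its results, as described above):
-- Return rows whose comma-delimited usergroup list contains exactly 20
SELECT t.*
FROM myTable AS t
CROSS APPLY STRING_SPLIT(t.usergroup, ',') AS s
WHERE LTRIM(RTRIM(s.value)) = '20';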