How do we get the name of the table for an arbitrary struct (that may or may not have a custom naming strategy)?
For example, the name of the Users table, which doesn't have a TableName method.
In v1 I could do tableName := db.NewScope(&User{}).TableName(), but that is no longer supported in v2.
How can I get the table name?
I have found a way to do it by running a finisher method:
table_name := db.First(&WantToKnowTheTableForThisStruct{}).Statement.Table
This seems like a bad hack. Running a finisher is clunky and unnecessary: I don't want to query for rows in the table, I just want to know the table name.
When SAS EG creates a query in the query builder it puts "work." in front of tables here is an example:
%_eg_conditional_dropds(WORK.QUERY_FOR_UNL_OBLIGATIONSBEHOLDN);
PROC SQL;
CREATE TABLE WORK.QUERY_FOR_UNL_OBLIGATIONSBEHOLDN AS
SELECT t1.CUSTOM_1,
t1.CUSTOM_2,
/* REPORTING_AMOUNT */
(SUM(t1.REPORTING_AMOUNT)) AS REPORTING_AMOUNT,
t1.LINE_ITEM,
t1.CUSTOM_5
FROM WORK.UNL_OBLIGATIONSBEHOLDNING t1
WHERE t1.CUSTOM_5 IN
(
'VLIK9035_POS_NOTE',
'VLIK9023_POS_COVERED_BOND'
) AND t1.CUSTOM_1 BETWEEN '20500000' AND '20599999' AND t1.LINE_ITEM NOT ='orphans'
GROUP BY t1.CUSTOM_1,
t1.CUSTOM_2,
t1.LINE_ITEM,
t1.CUSTOM_5;
QUIT;
If I remove "WORK." from both the created table and the queried table, nothing changes; it works just as well as before, as far as I can tell.
What does it mean when a table name is prefixed with WORK.?
Generally, a table is identified by a library name and a table name; a library can contain several tables. So the normal form is [library].[table]. If you omit the library name, SAS interprets the name as work.[table], which is why you can remove 'work.' and nothing changes.
Work is a temporary library. So yes, you can remove the work. part of your code.
APEX, as you know, stores a multiple-choice list of values in a single column by separating the values with ':', like this:
qwe:rty:yui:opa:sdf:ghj
but this is not how a database should work; there should be an intermediate table with foreign keys.
So my question is: has anyone tried to do it the right way, or should I just stick with APEX's method? And if so, is there no risk of errors?
I am not very experienced with APEX, so I still don't know how to insert into two tables at the same time; if anyone can tell me how, maybe I can find a solution on my own.
The values in a multi-select are treated as colon separated lists in APEX, but you have all the freedom you want in storing your data. There is no built-in support for multi selects (probably because there are many ways to implement this on the backend) but it's not too hard to implement the logic yourself.
Allow me to illustrate with an example. There is a table TEAMS (primary key TEAM_ID) with a child table MEMBERS (primary key MEMBER_ID) and an intersect table TEAM_MEMBERS (primary key TEAM_MEMBER_ID - auto generated).
In the teams form there is a page item P1_TEAM_MEMBERS of type select list with "Allow Multi Selection" set to "on".
There are 2 parts to this:
getting the data from the intersect table into the page item on page load
processing the data on submit.
(1) The first part is pretty simple. You create a computation (to run AFTER the Form Initialization Process) on P1_TEAM_MEMBERS of type "SQL Query (return colon separated values)". This type of computation is created specifically for handling multi selects. The source would be
SELECT member_id FROM team_members WHERE team_id = :P1_TEAM_ID
If you want to have more control, you can also take type "SQL Query (return single value)" and use LISTAGG to aggregate the rows into a colon-separated string.
(2) For processing the data you can use an application process to be executed AFTER the Automatic Row Processing process. This is because you need the primary key value (P1_TEAM_ID in our case) of the master table if you want to create a new team with members. In my code I use another page item, P1_TEAM_MEMBERS_OLD, that holds the original team member values. It also contains a colon-delimited string and is computed just before this page process.
The PL/SQL API apex_string offers a couple of very useful functions. apex_string.split takes a string with separators and converts it to a PL/SQL collection.
Use MULTISET to identify the differences in the old and new value.
DECLARE
  l_old_team_members apex_t_varchar2;
  l_new_team_members apex_t_varchar2;
  l_members_added    apex_t_varchar2;
  l_members_removed  apex_t_varchar2;
BEGIN
  l_old_team_members := apex_string.split(:P1_TEAM_MEMBERS_OLD, ':');
  l_new_team_members := apex_string.split(:P1_TEAM_MEMBERS, ':');
  l_members_added    := l_new_team_members MULTISET EXCEPT l_old_team_members;
  l_members_removed  := l_old_team_members MULTISET EXCEPT l_new_team_members;
  -- add new team members
  FOR i IN 1 .. l_members_added.COUNT LOOP
    INSERT INTO team_members (team_id, member_id)
    VALUES (:P1_TEAM_ID, l_members_added(i));
  END LOOP;
  -- delete removed team members
  FOR i IN 1 .. l_members_removed.COUNT LOOP
    DELETE FROM team_members
    WHERE team_id = :P1_TEAM_ID
    AND member_id = l_members_removed(i);
  END LOOP;
END;
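The split-and-diff step is the heart of this approach; the same add/remove computation can be sketched outside PL/SQL. A minimal Python illustration (the function name and sample values are made up for the example):

```python
def diff_members(old_csv: str, new_csv: str, sep: str = ":"):
    """Mimic apex_string.split followed by MULTISET EXCEPT:
    return (added, removed) sets of member ids."""
    old_ids = set(old_csv.split(sep)) if old_csv else set()
    new_ids = set(new_csv.split(sep)) if new_csv else set()
    # ids only in the new value must be inserted,
    # ids only in the old value must be deleted
    return new_ids - old_ids, old_ids - new_ids

added, removed = diff_members("qwe:rty:yui", "rty:opa")
print(sorted(added))    # members to INSERT
print(sorted(removed))  # members to DELETE
```

The two loops in the PL/SQL block then simply insert every id in the first set and delete every id in the second.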
The downside to this code is that there is no lost update detection out of the box, but you can implement that manually if it is a requirement.
As far as I can tell, either you use what Apex provides (which is a colon-separated list of values), or you "invent" your own solution.
I don't know why they (Oracle) chose to do it that way, but yes, it is annoying. You can't store those values properly in a table, can't enforce referential integrity constraints, and have "problems" when writing reports (as we usually store IDs and display names, we have to convert columns into rows; not that you can't do it, just saying), ...
As I don't have (too) many multi-select items, I kind of live with what I have, but I don't like it.
Let's say I have the following 'items' table in my PostgreSQL database:
id | item | value
---+------+------
 1 | a    |   10
 2 | b    |   20
 3 | c    |   30
For some reason I can't control I need to run the following query:
select max(value) from items;
which will return 30 as the result.
At this point, I know that I can find the record that contains that value using simple select statements, etc. That's not the actual problem.
My real questions are:
Does PostgreSQL know, behind the scenes, what the ID of that record is, even though the query only shows the max value of the column 'value'?
If yes, can I access that information and, therefore, get the ID and other data from the found record?
I'm not allowed to create indexes or sequences, or to change the way the max value is retrieved. That's a given. I need to work from that point onward and find a solution (which I actually have, from regular query work).
I'm just guessing that the database knows in which record that information (30) is and that I could have access to it.
I've been searching for an answer for a couple of hours but wasn't able to find anything.
What am I missing? Any ideas?
Note: postgres (PostgreSQL) 12.5 (Ubuntu 12.5-0ubuntu0.20.10.1)
You can simply extract the whole record that contains max(value) w/o bothering about Postgres internals like this:
select id, item, "value"
from items
order by "value" desc
limit 1;
I do not think that using undocumented "behind the scenes" ways is a good idea at all. The planner is smart enough to do exactly what you need w/o extra work.
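The ORDER BY ... LIMIT 1 pattern can be demonstrated end to end; here is a self-contained sketch using SQLite as a stand-in for PostgreSQL (the pattern is identical in both, and an equivalent alternative is WHERE value = (SELECT max(value) FROM items)):

```python
import sqlite3

# Build the sample 'items' table from the question in memory
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, item TEXT, value INTEGER)")
conn.executemany("INSERT INTO items VALUES (?, ?, ?)",
                 [(1, "a", 10), (2, "b", 20), (3, "c", 30)])

# Sort by value descending and keep one row: the whole record
# that holds max(value), with no need for internals
row = conn.execute(
    "SELECT id, item, value FROM items ORDER BY value DESC LIMIT 1"
).fetchone()
print(row)
```

Note that this returns one arbitrary row if several rows tie for the maximum; in PostgreSQL you can add further ORDER BY columns to break ties deterministically.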
I want an approach or code snippet to extract the column names and the corresponding table name from an Oracle query. The queries, and consequently the column and table names, change at run time, and some of the column names are computed, meaning they are wrapped in a function and aliased. I tried different string tokenizing techniques using regexp to separate this out as per the expected output but, so far, no luck!
Eg:
select mandate_name, investment_sub_team_name,
fn_sum(REG_INV_CMP_AUM) REG_INV_CMP_AUM,
fn_sum(NON_REG_INV_CMP_AUM) NON_REG_INV_CMP_AUM
from DM_SAI_VALUATIONS_STEP3
where position_interval_type = 'E'
and position_type = 'T'
group by mandate_name, investment_sub_team_name;
I want the output for the columns as:
mandate_name
investment_sub_team_name
fn_sum(REG_INV_CMP_AUM)
fn_sum(NON_REG_INV_CMP_AUM)
Note above: I want the columns with the function and not the alias
I want the output for the table name as: DM_SAI_VALUATIONS_STEP3 against all the columns that I listed above
I cannot edit the queries as they are part of an XML output, so I cannot change the aliases. The second point is to just extract the table name from the query. Please consider that nothing can be hard coded, like the position of a string token, as the queries containing the columns and the table will differ. I am looking for a generic approach to tokenize them. So, against the column output that I expect, I just need the table name as well. There is always going to be only one table in the FROM clause, so extracting that would not be an issue.
Expected output:
Column Name Table Name
----------- ----------
mandate_name DM_SAI_VALUATIONS_STEP3
investment_sub_team_name DM_SAI_VALUATIONS_STEP3
fn_sum(REG_INV_CMP_AUM) DM_SAI_VALUATIONS_STEP3
fn_sum(NON_REG_INV_CMP_AUM) DM_SAI_VALUATIONS_STEP3
Any help or pointers would be much appreciated.
You realistically can't solve this problem in general without writing your own SQL compiler (at least the parser and lexer up through the semantic analysis phase). That is a non-trivial exercise particularly if you want to accept any valid Oracle SQL query. Oracle Corporation used to have different SQL parsers for the SQL VM and the PL/SQL VM and couldn't keep them in sync-- it's a major investment of time and effort to keep evolving your parser as the supported SQL grammar improves.
If you're really determined to go down this path, you can start with some of the ANTLR SQL grammars. The O'Reilly Flex and Bison book also has a chapter on parsing SQL that you could potentially use as a starting point. Of course, you'll need to revise and extend the grammars to support whatever SQL features your queries might contain. You'll then need to build the syntax analyzer and semantic analysis portions of the compiler to implement the appropriate scope resolution rules to be able to figure out which table a particular reference to a particular column comes from. Just to reiterate, this is a non-trivial exercise and it's one that has to be enhanced for every new release of the database.
If you can relax your requirements and make some assumptions about what sorts of queries you're going to be seeing, it becomes much easier to write a parser. If you can guarantee that every query references exactly 1 table, identifying which table a particular column comes from is much easier. If you can guarantee that every function call takes at most one column from one table as a parameter, that also makes things easier-- otherwise, you'll need to figure out what you want to return for the table name if the column is the result of a function that takes as arguments columns from multiple tables.
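Under those relaxed assumptions the parser becomes a small tokenizer. A sketch in Python (illustrative only, not a general SQL parser: it assumes exactly one table, no subqueries, and aliases appearing only after a closing parenthesis):

```python
import re

def parse_simple_select(sql: str):
    """Extract the select-list expressions (minus trailing aliases)
    and the single table name from a simple one-table query."""
    m = re.search(r"select\s+(.*?)\s+from\s+(\w+)", sql,
                  re.IGNORECASE | re.DOTALL)
    select_list, table = m.group(1), m.group(2)
    # split on commas at parenthesis depth 0 only, so commas
    # inside function calls do not break an expression apart
    cols, depth, start = [], 0, 0
    for i, ch in enumerate(select_list):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch == "," and depth == 0:
            cols.append(select_list[start:i].strip())
            start = i + 1
    cols.append(select_list[start:].strip())
    # drop a trailing alias of the form "expr) ALIAS" -> "expr)"
    cols = [re.sub(r"\)\s+\w+$", ")", c) for c in cols]
    return cols, table

cols, table = parse_simple_select("""select mandate_name, investment_sub_team_name,
fn_sum(REG_INV_CMP_AUM) REG_INV_CMP_AUM,
fn_sum(NON_REG_INV_CMP_AUM) NON_REG_INV_CMP_AUM
from DM_SAI_VALUATIONS_STEP3
where position_interval_type = 'E'""")
print(table)
print(cols)
```

Any query outside the assumptions (inline views, joins, the word "from" inside a string literal) will break this, which is exactly the point of the answer above: relax the requirements or write a real parser.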
I also agree it is generally not possible. But maybe the solution is to get in touch with the creator of the XML message beforehand and agree on a different protocol than a finished SELECT statement, i.e. agree on him sending the column list directly.
If this is not possible and you want to make certain assumptions about how the query is built, then you can tokenize the text after the select and before the from, using the comma as a separator. But as far as I know you cannot really do that with regular expression substring commands alone; I think you need to write a small PL/SQL function.
But still take care: the from keyword could appear somewhere inside the select list. What do you do if you suddenly get a query like this:
select
something,
(select count(*) from othertable) as cnt,
andfromthiscolumn xyz
from mytable
So my tip here is to rather sort it out organizationally than trying to code the impossible.
If you know that the structure of your query strings will not change much, you can do something like this:
set serveroutput on
set termout on
clear
declare
  v_str   varchar2(500) := 'select mandate_name, investment_sub_team_name,
fn_sum(REG_INV_CMP_AUM) REG_INV_CMP_AUM,
fn_sum(NON_REG_INV_CMP_AUM) NON_REG_INV_CMP_AUM
from DM_SAI_VALUATIONS_STEP3
where position_interval_type = ''E''
and position_type = ''T''
group by mandate_name, investment_sub_team_name;';
  v_tmp   varchar2(500);
  v_cols  varchar2(500);
  v_table varchar2(500);
begin
  v_tmp := replace(v_str, 'select ', '');
  v_tmp := substr(v_tmp, 1, instr(v_tmp, 'where') - 1);
  dbms_output.put_line('original query: ' || v_str);
  v_cols := substr(v_tmp, 1, instr(v_tmp, 'from') - 1);
  dbms_output.put_line('column names: ' || v_cols);
  -- skip past 'from ' (5 characters) and trim surrounding whitespace
  v_table := trim(substr(v_tmp, instr(v_tmp, 'from ') + 5));
  dbms_output.put_line('table name: ' || v_table);
end;
/
In the process of fixing a poorly imported database with issues caused by using the wrong database encoding, or something like that.
Anyways, coming back to my question, in order to fix this issues I'm using a query of this form:
UPDATE table_name SET field_name =
replace(field_name, 'search_text', 'replace_text');
And thus, if the table I'm working on has multiple columns, I have to call this query for each of the columns. Also, as there is more than one search-and-replace pair to run, I have to call the query for each of those pairs as well.
So as you can imagine, I end up running tens of queries just to fix one table.
What I was wondering is whether there is a way to either combine multiple find-and-replace operations in one query (say, look for this set of things and, if found, replace each with the corresponding entry from this other set), or to make a query of the form shown above run for each column of a table, regardless of the columns' names or number.
Thank you in advance for your support,
titel
Let's try and tackle each of these separately:
If the set of replacements is the same for every column in every table that you need to do this on (or there are only a couple patterns), consider creating a user-defined function that takes a varchar and returns a varchar that just calls replace(replace(#input,'search1','replace1'),'search2','replace2') nested as appropriate.
To update multiple columns at the same time you should be able to do UPDATE table_name SET field_name1 = replace(field_name1,...), field_name2 = replace(field_name2,...) or something similar.
As for running something like that for every column in every table, I'd think it would be easiest to write some code which fetches a list of columns and generates the queries to execute from that.
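That generator approach can be sketched in a few lines. Here is a self-contained illustration using SQLite's PRAGMA table_info to discover the columns (in MySQL you would query information_schema.columns instead; the table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (title TEXT, body TEXT)")
conn.execute("INSERT INTO posts VALUES "
             "('search_text in title', 'search_text in body')")

# Discover the column names, then generate and run one
# UPDATE ... SET col = replace(col, ...) statement per column.
# The names come from the schema itself, not from user input,
# so interpolating them into the SQL string is safe here.
cols = [row[1] for row in conn.execute("PRAGMA table_info(posts)")]
for col in cols:
    conn.execute(f"UPDATE posts SET {col} = replace({col}, ?, ?)",
                 ("search_text", "replace_text"))

print(conn.execute("SELECT title, body FROM posts").fetchone())
```

The same loop could just as well print the generated statements instead of executing them, giving you a script to review before running it against the real database.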
I don't know of a way to automatically run a search-and-replace on each column, however the problem of multiple pairs of search and replace terms in a single UPDATE query is easily solved by nesting calls to replace():
UPDATE table_name SET field_name =
  replace(
    replace(
      replace(field_name, 'foo', 'bar'),
      'see', 'what'
    ),
    'I', 'mean?'
  );
If you have multiple replaces of different text in the same field, I recommend that you create a table with the current values and what you want them replaced with. (Could be a temp table of some kind if this is a one-time deal; if not, make it a permanent table.) Then join to that table and do the update.
Something like:
update t1
set field1 = t2.newvalue
from table1 t1
join mycrossreferencetable t2 on t1.field1 = t2.oldvalue
Sorry, I didn't notice this is MySQL; the code is what I would use in SQL Server. The MySQL syntax may be different, but the technique would be similar.
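For reference, MySQL spells the same join-based update as UPDATE table1 t1 JOIN xref t2 ON t1.field1 = t2.oldvalue SET t1.field1 = t2.newvalue. Here is a self-contained sketch of the technique using SQLite's portable correlated-subquery form (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (field1 TEXT);
INSERT INTO table1 VALUES ('oldA'), ('oldB'), ('keep');
CREATE TABLE xref (oldvalue TEXT, newvalue TEXT);
INSERT INTO xref VALUES ('oldA', 'newA'), ('oldB', 'newB');
""")

# For every row whose value appears in the cross-reference table,
# look up the replacement; rows without a match are left untouched.
conn.execute("""
UPDATE table1
SET field1 = (SELECT newvalue FROM xref WHERE oldvalue = field1)
WHERE field1 IN (SELECT oldvalue FROM xref)
""")

rows = [r[0] for r in
        conn.execute("SELECT field1 FROM table1 ORDER BY field1")]
print(rows)
```

The WHERE clause matters: without it, rows with no match in the cross-reference table would be set to NULL by the subquery.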
I wrote a stored procedure that does this. I use this on a per database level, although it would be easy to abstract it to operate globally across a server.
I would just paste this inline, but it would seem that I'm too dense to figure out how to use the markdown deal, so the code is here:
http://www.anovasolutions.com/content/mysql-search-and-replace-stored-procedure