What is the location of built-in SQL functions and Oracle packages in Oracle Database?

I want to know the location of the file / table where the definitions of Oracle's built-in functions / packages / procedures, such as MAX() and DBMS_OUTPUT, are stored.

In the PL/SQL engine, the Oracle supplied functions such as MAX() are part of the package STANDARD in the SYS schema.
Most other supplied packages also reside in the SYS schema; however, you can find out where any individual package is located quite easily - for example:
SELECT *
FROM all_objects
WHERE object_name = 'DBMS_OUTPUT'
Results:
| OWNER | OBJECT_NAME | SUBOBJECT_NAME | OBJECT_ID | DATA_OBJECT_ID | OBJECT_TYPE | CREATED | LAST_DDL_TIME | TIMESTAMP | STATUS | TEMPORARY | GENERATED | SECONDARY | NAMESPACE | EDITION_NAME |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| SYS | DBMS_OUTPUT | (null) | 4972 | (null) | PACKAGE | August, 27 2011 08:22:22+0000 | August, 27 2011 08:22:22+0000 | 2011-08-27:08:22:22 | VALID | N | N | N | 1 | (null) |
| PUBLIC | DBMS_OUTPUT | (null) | 4973 | (null) | SYNONYM | August, 27 2011 08:22:22+0000 | August, 27 2011 08:22:22+0000 | 2011-08-27:08:22:22 | VALID | N | N | N | 1 | (null) |
The following documentation page lists most (if not all) of the PL/SQL supplied packages:
http://docs.oracle.com/cd/B28359_01/appdev.111/b28419/intro.htm#BABGEDBH
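If you want to read the source of a supplied package specification, the data dictionary exposes it through ALL_SOURCE; a minimal sketch, assuming you have the privileges to see SYS-owned objects (note that most supplied package bodies are wrapped):
-- Package specification source for DBMS_OUTPUT, line by line
SELECT line, text
FROM all_source
WHERE owner = 'SYS'
AND name = 'DBMS_OUTPUT'
AND type = 'PACKAGE'
ORDER BY line;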

The scripts that create the built-in functions, packages and procedures are stored on the database server machine. Find the value of the environment variable $ORACLE_HOME, then go to $ORACLE_HOME/rdbms/admin/ and use grep to find the file you're looking for.
If the database server is a Windows machine, run ECHO %ORACLE_HOME% at the command prompt and proceed from there.

Related

Audit data migration into Oracle

I have a task to migrate data from another database to an Oracle database.
The data from the previous database has audit information, i.e. tracking of record creates/updates with update_time and update_user. For simplicity, let's assume the previous database I am talking about is an Excel file of the following format:
Key | Value | Update_Time | Update_User |
----|-------|-------------|-------------|
a | 1 | 23/04/2020 | user1 |
b | 2 | 21/04/2020 | user2 |
a | 3 | 20/04/2020 | user1 |
a | 4 | 19/04/2020 | user5 |
a | 5 | 18/04/2020 | user2 |
What is the best practice to move the data into Oracle such that users can still query that audit info along with the new audit trail, given that the data is now being saved to a new table in Oracle (below)? Does Oracle provide any native solution for this? I tried Oracle Flashback, but I'm not sure how to include the previous audit records, because as I understand it, Flashback can only be queried for data changes from now on. Ideally, I want to store only the latest data in Oracle like this, as these are the actual active rows:
Key | Value | Last_Update_Time | Last_Update_User |
----|-------|------------------|------------------|
a | 1 | 23/04/2020 | user1 |
b | 2 | 21/04/2020 | user2 |
Let's say a user then edits the row with key b on 24/04/2020; I want to fetch the following result for UI display (currently I am using Python SQLAlchemy to access the db, but a solution with a SQL query would be fine for a start):
Key | Value | Update_Time | Update_User |
----|-------|-------------|-------------|
b | 7 | 24/04/2020 | user2 | ---> this is an update on the new Oracle table above
a | 1 | 23/04/2020 | user1 | ---> these rows and below I want to somehow load into Oracle without explicitly creating a new table for them
b | 2 | 21/04/2020 | user2 |
a | 3 | 20/04/2020 | user1 |
a | 4 | 19/04/2020 | user5 |
a | 5 | 18/04/2020 | user2 |
After the change, the main data table in Oracle should look like this:
Key | Value | Last_Update_Time | Last_Update_User |
----|-------|------------------|------------------|
a | 1 | 23/04/2020 | user1 |
b | 7 | 24/04/2020 | user2 |
You can use the SELECT query below to get the latest audit row per key:
SELECT ad.*
FROM audit_table ad
JOIN (SELECT key, MAX(update_time) AS max_update_time
      FROM audit_table
      GROUP BY key) rec
  ON ad.key = rec.key
 AND ad.update_time = rec.max_update_time;
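Alternatively, a minimal analytic-function sketch, assuming the same hypothetical audit_table with Key, Value, Update_Time and Update_User columns:
SELECT key, value, update_time, update_user
FROM (SELECT a.*,
             -- rank rows per key, newest first
             ROW_NUMBER() OVER (PARTITION BY key ORDER BY update_time DESC) AS rn
      FROM audit_table a)
WHERE rn = 1;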

SQL Server: Listing Differences Between Tables

There are a few similar threads over the years regarding this but I haven't found or been able to do what I'm seeking.
I currently have two tables that have the same schema that I produce through a script. For name's sake, one is "results_prior" and the other is "results_current". This new query will ideally run once a month and find if there were any differences. Example:
TABLE: results_prior
----------------------------------------
| ID | ENVIRONMENT | EDITION | CONTACT |
----------------------------------------
| 03 | Development | 2008 | Bob |
----------------------------------------
| 05 | Production | 2012 | Phil |
----------------------------------------
| 09 | Development | 2008 | Erik |
----------------------------------------
| 13 | Production | 2012 | Ashley |
----------------------------------------
| 22 | Production | 2012 | Erik |
----------------------------------------
TABLE: results_current
----------------------------------------
| ID | ENVIRONMENT | EDITION | CONTACT |
----------------------------------------
| 03 | Development | 2008 | Bob |
----------------------------------------
| 05 | Production | 2012 | Phil |
----------------------------------------
| 22 | Production | 2012 | Erik |
----------------------------------------
When the two are compared, the result should be:
----------------------------------------
| ID | ENVIRONMENT | EDITION | CONTACT |
----------------------------------------
| 09 | Development | 2008 | Erik |
----------------------------------------
| 13 | Production | 2012 | Ashley |
----------------------------------------
Since 09 and 13 were in results_prior but not in results_current. Likewise, and this is perhaps the tricky part, the same should be done if results_current has rows that results_prior does not - so, vice versa.
Sorry I have no example code to go off of. I have been goofing with UNION, JOINs, and EXCEPTs the last few hours and I feel like my logic (again, in SQL Server) is just not making any sense. Any assistance would be appreciated.
This can be treated as the result set of (A - B) UNION (B - A). I assume all the columns are the same in both tables, hence the * in the SELECT.
(select * from results_prior
except
select * from results_current)
union all
(select * from results_current
except
select * from results_prior)
I would try something roughly similar to this:
SELECT ID, ENVIRONMENT, EDITION, CONTACT FROM RESULTS_PRIOR
WHERE ID NOT IN (SELECT ID FROM RESULTS_CURRENT)
UNION
SELECT ID, ENVIRONMENT, EDITION, CONTACT FROM RESULTS_CURRENT
WHERE ID NOT IN (SELECT ID FROM RESULTS_PRIOR)
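If ID can contain NULLs, NOT IN can return no rows at all when the subquery produces a NULL; a NOT EXISTS sketch (assuming the same table and column names) avoids that:
-- Rows present only in RESULTS_PRIOR
SELECT id, environment, edition, contact
FROM results_prior p
WHERE NOT EXISTS (SELECT 1 FROM results_current c WHERE c.id = p.id)
UNION ALL
-- Rows present only in RESULTS_CURRENT
SELECT id, environment, edition, contact
FROM results_current c
WHERE NOT EXISTS (SELECT 1 FROM results_prior p WHERE p.id = c.id);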

Last accessed timestamp of a Netezza table?

Does anyone know of a query that gives me details on the last time a Netezza table was accessed for any of the operations (select, insert or update) ?
Depending on your setup you may want to try the following query:
select *
from _v_qryhist
where lower(qh_sql) like '%tablename %'
There is a collection of history views in Netezza that should provide the information you require.
Netezza does not track this information in the catalog, so you will typically have to mine that from the query history database, if one is configured.
Modern Netezza query history information is typically stored in a dedicated database. Depending on permissions, you may be able to see if history collection is enabled, and which database it is using with the following command. Apologies in advance for the screen-breaking wrap to come.
SYSTEM.ADMIN(ADMIN)=> show history configuration;
CONFIG_NAME | CONFIG_DBNAME | CONFIG_DBTYPE | CONFIG_TARGETTYPE | CONFIG_LEVEL | CONFIG_HOSTNAME | CONFIG_USER | CONFIG_PASSWORD | CONFIG_LOADINTERVAL | CONFIG_LOADMINTHRESHOLD | CONFIG_LOADMAXTHRESHOLD | CONFIG_DISKFULLTHRESHOLD | CONFIG_STORAGELIMIT | CONFIG_LOADRETRY | CONFIG_ENABLEHIST | CONFIG_ENABLESYSTEM | CONFIG_NEXT | CONFIG_CURRENT | CONFIG_VERSION | CONFIG_COLLECTFILTER | CONFIG_KEYSTORE_ID | CONFIG_KEY_ID | KEYSTORE_NAME | KEY_ALIAS | CONFIG_SCHEMANAME | CONFIG_NAME_DELIMITED | CONFIG_DBNAME_DELIMITED | CONFIG_USER_DELIMITED | CONFIG_SCHEMANAME_DELIMITED
-------------+---------------+---------------+-------------------+--------------+-----------------+-------------+---------------------------------------+---------------------+-------------------------+-------------------------+--------------------------+---------------------+------------------+-------------------+---------------------+-------------+----------------+----------------+----------------------+--------------------+---------------+---------------+-----------+-------------------+-----------------------+-------------------------+-----------------------+-----------------------------
ALL_HIST_V3 | NEWHISTDB | 1 | 1 | 20 | localhost | HISTUSER | aFkqABhjApzE$flT/vZ7hU0vAflmU2MmPNQ== | 5 | 4 | 20 | 0 | 250 | 1 | f | f | f | t | 3 | 1 | 0 | 0 | | | HISTUSER | f | f | f | f
(1 row)
Also make note of the CONFIG_VERSION, as it will come into play when crafting the following query example. In my case, I happen to be using the version 3 format of the query history database.
Assuming history collection is configured, and that you have access to the history database, you can get the information you're looking for from the tables and views in that database. These are documented here. The following is an example, which reports when the given table was the target of a successful insert, update, or delete by referencing the "usage" column. Here I use one of the history table helper functions to unpack that column.
SELECT FORMAT_TABLE_ACCESS(usage),
hq.submittime
FROM "$v_hist_queries" hq
INNER JOIN "$hist_table_access_3" hta
USING (NPSID, NPSINSTANCEID, OPID, SESSIONID)
WHERE hq.dbname = 'PROD'
AND hta.schemaname = 'ADMIN'
AND hta.tablename = 'TEST_1'
AND hq.SUBMITTIME > '01-01-2015'
AND hq.SUBMITTIME <= '08-06-2015'
AND
(
instr(FORMAT_TABLE_ACCESS(usage),'ins') > 0
OR instr(FORMAT_TABLE_ACCESS(usage),'upd') > 0
OR instr(FORMAT_TABLE_ACCESS(usage),'del') > 0
)
AND status=0;
FORMAT_TABLE_ACCESS | SUBMITTIME
---------------------+----------------------------
ins | 2015-06-16 18:32:25.728042
ins | 2015-06-16 17:46:14.337105
ins | 2015-06-16 17:47:14.430995
(3 rows)
You will need to change the digit at the end of the "$hist_table_access_3" name in the join to match your query history version.

Query to compare values across different tables?

I have a pair of models in my Rails app that I'm having trouble bridging.
These are the tables I'm working with:
states
+----+--------+------------+
| id | fips | name |
+----+--------+------------+
| 1 | 06 | California |
| 2 | 36 | New York |
| 3 | 48 | Texas |
| 4 | 12 | Florida |
| 5 | 17 | Illinois |
| … | … | … |
+----+--------+------------+
places
+----+--------+
| id | place |
+----+--------+
| 1 | Fl |
| 2 | Calif. |
| 3 | Texas |
| … | … |
+----+--------+
Not all places are represented in the states model, but I'm trying to perform a query where I can compare a place's place value against all state names, find the closest match, and return the corresponding fips.
So if my input is Calif., I want my output to be 06
I'm still very new to writing SQL queries, so if there's a way to do this using Ruby within my Rails (4.1.5) app, that would be ideal.
My other plan of attack was to add a fips column to the "places" table, and write something that would run the above comparison and then populate fips so my app doesn't have to run this query every time the page loads. But I'm very much a beginner, so that sounds... ambitious.
This is not an easy query in SQL. Your best bet is one of the fuzzy string matching routines, which are documented here.
For instance, soundex() or levenshtein() may be sufficient for what you want. Here is an example:
select distinct on (p.place) p.place, s.name, s.fips, levenshtein(p.place, s.name) as dist
from places p cross join
states s
order by p.place, dist asc;
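Note that soundex() and levenshtein() come from PostgreSQL's fuzzystrmatch extension (the DISTINCT ON syntax above assumes PostgreSQL), so you may need to enable it first. A minimal sketch of the backfill idea mentioned in the question, assuming a fips column has already been added to places:
-- Enable the fuzzy string matching functions (requires appropriate privileges)
CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;
-- One-off backfill: copy the fips of the closest-matching state name into each place
UPDATE places p
SET fips = (SELECT s.fips
            FROM states s
            ORDER BY levenshtein(p.place, s.name)
            LIMIT 1);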

Is there a termination character for bq in interactive mode? How do I set it?

I've just started using bigquery and I'm used to writing SQL across multiple lines. However, if I run
bq shell
to get into interactive mode, I can't enter a query that runs across multiple lines without bq reporting an error, as it evaluates the first line of the statement and then complains that there is no FROM or GROUP BY clause.
In other database clients, I can set a termination character: eg in DB2,
db2 -t
allows me to run db2 with commands terminated with ;
Is there a way to run bq with a termination character for each statement? I've looked at https://developers.google.com/bigquery/bq-command-line-tool and although it refers to global flags, I don't see a reference to termination characters.
After delving into the source code for bq, I can confirm there's no such termination character that allows you to do multi-line queries.
It's a consequence of the cmd module on which bq shell is built.
As an alternative, you could run queries directly from your shell with bq query YOUR QUERY, as the shell allows multi-line commands when they are enclosed in double quotes (").
Example:
bq query "SELECT station_number, year, month, day
FROM [publicdata:samples.gsod]
LIMIT 10"
+----------------+------+-------+-----+
| station_number | year | month | day |
+----------------+------+-------+-----+
| 42420 | 2007 | 5 | 20 |
| 42080 | 2007 | 5 | 5 |
| 152990 | 1990 | 3 | 26 |
| 543110 | 1976 | 10 | 24 |
| 740430 | 1966 | 11 | 30 |
| 228540 | 1949 | 9 | 23 |
| 747809 | 2009 | 7 | 17 |
| 681120 | 1997 | 2 | 15 |
| 26070 | 2008 | 12 | 27 |
| 128430 | 1988 | 9 | 22 |
+----------------+------+-------+-----+