ORACLE APEX - How can I insert form data into multiple tables, in my case one table storing text data while the other table is for a BLOB?

I have a form that targets 2 tables - one for text data, the 2nd for documents (BLOB).
For storing the text data (e.g. name, address details, etc.) I am using a process which executes a PL/SQL procedure on the passed page item values (in the procedure I call NEXTVAL from a sequence and insert).
I need help on whether I can use a different process on the same form's item (BLOB type, File Browse) for inserting the BLOB into its respective table,
or, if I can use the same process, how would I pass the BLOB to the procedure?
I am able to insert the BLOB if I base the form on the BLOB table only, using the Form - Automatic Row Processing (DML) option (Identification section). But for the other form elements I am using the "SQL code" option.
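A minimal sketch of what such a second process could look like, assuming the File Browse item (say P2_DOCUMENT) uses the Table APEX_APPLICATION_TEMP_FILES storage type, and assuming a hypothetical DOCUMENTS table with a DOCUMENTS_SEQ sequence:

DECLARE
    l_blob      BLOB;
    l_filename  VARCHAR2(400);
    l_mime_type VARCHAR2(255);
BEGIN
    -- File Browse items stored in APEX_APPLICATION_TEMP_FILES keep the file name
    -- in the page item; the content itself sits in the temp files view.
    SELECT blob_content, filename, mime_type
      INTO l_blob, l_filename, l_mime_type
      FROM apex_application_temp_files
     WHERE name = :P2_DOCUMENT;

    INSERT INTO documents (doc_id, filename, mime_type, doc_content)
    VALUES (documents_seq.NEXTVAL, l_filename, l_mime_type, l_blob);
END;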

Related

Azure Data Factory V2: How to pass a file name to stored procedure variable

I have a big fact Azure SQL table with the following structure:
Company   Revenue
-------   -------
A         100
B         200
C         100
...       ...
I am now building a stored procedure in Azure Data Factory V2 that will delete all records of a specific company from the Azure SQL fact table above on a monthly basis. For this exercise, that company is identified by the variable @company. The structure of the stored procedure was created as:
@company NVARCHAR(5)
DELETE FROM table
WHERE [company] = @company
As I will have different Excel files from each company inserting data into this table on a monthly basis (with a Copy Activity), I want to use the stored procedure above to delete the old data for that company before I add the most up-to-date data.
I would then like to pass the name of that Excel file (stored in a blob container) to the variable @company, so that the stored procedure knows which data to delete from the fact table. For example: if the Excel file is "A", the stored procedure should effectively run "delete from table where company = A".
Any ideas on how to pass the Excel file names to the variable @company and set this up in Azure Data Factory V2?
Based on your description, an event-based trigger in Azure Data Factory may meet your needs. An event-based trigger runs pipelines in response to an event, such as the arrival of a file, or the deletion of a file, in Azure Blob Storage.
So, when the new Excel file is created in blob storage (by the way, this only supports V2 storage accounts; for more details please refer to this article), you can get @triggerBody().folderPath and @triggerBody().fileName. To use the values of these properties in a pipeline, you must map the properties to pipeline parameters. After mapping the properties to parameters, you can access the values captured by the trigger through the @pipeline().parameters.parameterName expression throughout the pipeline. (doc)
You could get the filename and pass it into your stored procedure. Then do the deletion and copy activities.
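For reference, the stored procedure that gets called can stay very simple; a minimal T-SQL sketch (the procedure and table names here are hypothetical):

-- Hypothetical names; @company receives the file name captured by the trigger.
CREATE PROCEDURE dbo.usp_DeleteCompanyRows
    @company NVARCHAR(5)
AS
BEGIN
    SET NOCOUNT ON;
    -- Remove the old rows for this company before the Copy Activity loads the new file.
    DELETE FROM dbo.FactRevenue
    WHERE [company] = @company;
END;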
If you use the Stored Procedure activity, you can fill the parameters from pipeline parameters: https://learn.microsoft.com/en-us/azure/data-factory/transform-data-using-stored-procedure
You can click "Import" to add the parameters automatically after selecting a stored procedure, or add them by hand. Then you can use a pipeline expression to fill them dynamically, e.g. as suggested by Jay Gong, using the trigger properties @triggerBody().folderPath and @triggerBody().fileName.
Alternatively, instead of using a stored procedure, you could add a pre-copy script to your copy activity. This option shows up only for appropriate sinks, such as a database table. You can fill this script dynamically as well. In your case, it could look like:
@{concat('DELETE FROM table
WHERE [company] = ''', triggerBody().fileName, '''')}
It might also make sense to add a parameter to the pipeline containing the file name and set it to @triggerBody().fileName (or a more complex expression), in case you use it in multiple places.

How can I create a Primary Key in Oracle when composite key and a "generated always as identity" option won't work?

I'm working on an SSIS project that pulls data from Excel and loads it to an Oracle database every month. I plan to pull data from the Excel file and load it to an Oracle stage table. I will be using a merge statement because the data that gets loaded each month is a rolling 12-month list and the data can change, so I need to be able to INSERT when records don't match or UPDATE when they do. My control flow looks like this: Truncate Stage Table (to clear out the table from the last package run) ---> Data Flow from Excel to Stage Table ---> Merge to Target Table in Oracle.
My problem is that the data in the source Excel file doesn't have any unique columns from which to select a primary key or a composite key, as it is possible (although very unlikely) that a new record could have exactly the same information. I am unable to use "generated always as identity" because my SSIS package needs to truncate the stage table at the beginning of each job. This would generate the same ID numbers in the new load and create problems in the target table.
Any suggestions as to how I can get around this problem?
Welcome to SO and ETL. Instead of using a staging table, use two sources in SSIS: the Excel file and the existing production table. Sort both inputs and then perform a merge join on the unique identifier. From there, use a Derived Column transformation to add a new column called 'Action' which marks a row as an INSERT, UPDATE, or DELETE based on whether the join key is NULL. So:
NULL from file means DELETE (not in file, in database)
NULL from database means INSERT (in file, not in database)
Not NULL for both means UPDATE (in file, in database)
From there, use a Conditional Split to send rows either to an OLE DB Destination (INSERT) or to a SQL command (UPDATE or DELETE; see the sketch after the note below). You can now remove the stage environment and the MERGE command from your process. This has the added benefit of removing the ETL load from the SQL Server, assuming SSIS is running on a separate server.
Note: The sort transformation has the option to remove duplicates.
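If you go the SQL command route for the UPDATE and DELETE branches, the statements are just parameterized SQL executed per row by an OLE DB Command; a rough sketch with hypothetical table and key names, where the ? markers map to data flow columns:

-- UPDATE branch
UPDATE target_table
   SET column1 = ?, column2 = ?
 WHERE business_key = ?;

-- DELETE branch
DELETE FROM target_table
 WHERE business_key = ?;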

Using SSIS Package, How to validate the source records for duplicate before inserting?

SQL Server 2012: using an SSIS package, how do I validate the source records for duplicates before inserting?
Our source file is a .csv. We are facing duplicate records being loaded into the staging table.
At present, we follow a manual process for loading the data.
How do we validate the source file data against the destination table before loading, and load only the valid records? Duplicates can be loaded not only because the source file itself contains duplicate records, but also because the same file may be reloaded into the staging table.
We do not truncate the staging table; we keep the existing records as they are.
Second question: how do we pick up the name of the source file and pass it into the load? Possibly by having a derived column "FileName" which gets loaded along with the raw data into the staging table.
The typical load pattern I use in this case is:
Prepare a staging table that matches the source file
In SSIS, run an Execute SQL Task with TRUNCATE TABLE StagingTable; (which clears it out)
Then, run a data flow task that loads the entire data file into the staging table
Lastly, merge the staging table into the final table.
I prefer to do this last step in an Execute SQL Task as well:
INSERT INTO FinalTable
    (PrimaryKey, Column1, Column2, Column3)
SELECT
    PrimaryKey, Column1, Column2, Column3
FROM StagingTable SRC
WHERE NOT EXISTS (
    SELECT * FROM FinalTable TGT WHERE TGT.PrimaryKey = SRC.PrimaryKey
);
If you prefer a graphical UI, and you don't mind the extra network traffic and slower processing time, you can do the same type of merge operation using Lookups. You can even use the SCD component, but I strongly discourage its use.
Whether you do it in T-SQL or in the UI, you need a key that can be used to uniquely identify the records (referred to as PrimaryKey in my example). If you don't have this key, there is no way to deduplicate.
Note that in this example you have a 'real' staging table whose only purpose is to get the data file into the database. Then you have a final table that contains the final, consistent result.
Also note that this pattern only adds new rows - it will not update existing rows if they change in the data file.
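If you do need to pick up changes as well, the last step can be swapped for a MERGE statement instead of the INSERT above; a sketch using the same hypothetical column names:

MERGE FinalTable AS TGT
USING StagingTable AS SRC
    ON TGT.PrimaryKey = SRC.PrimaryKey
WHEN MATCHED THEN
    UPDATE SET Column1 = SRC.Column1,
               Column2 = SRC.Column2,
               Column3 = SRC.Column3
WHEN NOT MATCHED BY TARGET THEN
    INSERT (PrimaryKey, Column1, Column2, Column3)
    VALUES (SRC.PrimaryKey, SRC.Column1, SRC.Column2, SRC.Column3);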
Given your exact scenario (loading the same file again), I would first check whether that file's data has already been loaded to the staging table. If you do that, you don't have to worry about checking for duplicates at the record level.
How are you setting the connection to the file? For most of the data loads I have dealt with, I designed a Foreach Loop Container in which the file name/path is populated into a user variable. As you said, you could then use a Derived Column transformation to add a new column which gets its value from that variable. If you don't have the file name in a user variable, you could use an Expression Task in the control flow to populate it.
To cover your exact requirement, I would use the step above to populate the file name in the table. You could even normalize it out to a different table instead of storing the long file name on every data record. Once you have all the file names in the database, you could just run an Execute SQL Task at the beginning to see if that file name is already in the database.
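That up-front check can be a simple parameterized query in the Execute SQL Task; a sketch, assuming the staging table has a FileName column and the file name sits in a user variable mapped to the ? parameter:

-- Returns a count the package can branch on (skip the load when it is greater than 0).
SELECT COUNT(*) AS AlreadyLoaded
FROM dbo.StagingTable
WHERE FileName = ?;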
Two years back I faced the same problem when importing TSV files.
I tried many other solutions, but the best I could design was a C# script for this kind of validation.
What I did as a solution:
Create a C# DataTable object in memory with a primary key constraint,
like:
// Collect the key column(s) and assign them as the DataTable's primary key.
DataColumn[] keyColumn = new DataColumn[1];
keyColumn[0] = dtFilterdPK.Columns["Column name"];
dtFilterdPK.PrimaryKey = keyColumn;
Then try to add the rows from your CSV to this DataTable one by one.
Whenever a row duplicates the primary key, adding it will raise a constraint error.
Handle this error in a try...catch block and log the duplicate as per your logging requirements.
Keep those error records out of the DataTable object.
At last, import the DataTable into your table as a bulk import, like:
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(myConnection))
{
    bulkCopy.DestinationTableName = "Your DB Table Name"; // assign the destination table name
    bulkCopy.WriteToServer(dtToBeImport);                 // write the de-duplicated DataTable into the actual table
}
Hope this will help you.

How to Retrieve BLOB Data In Oracle APEX

I want to retrieve BLOB data from a database table into a Display Image item called P4_COMPANY_LOGO.
I have set the source for the Display Image item to SQL Query (Return Single Value):
select company_logo from companies where company_id=10;
But when I run the page it immediately returns this error:
P4_COMPANY_LOGO has to have a valid BLOB column as source.
Help Please!..
Maybe you don't have the item properties quite right. This would work:
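For example, with the Display Image item's Settings based on "BLOB Column returned by SQL statement" (an assumption about your item setup) and the source query entered without a trailing semicolon:

select company_logo
  from companies
 where company_id = 10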

Reading the value from text file and updating it to a field in sql table

I have a text file with data like:
Patient name: Patient1 Medical rec #: A1 Admit date: 04/26/2009 Discharge date: 04/26/2009
DRG: 982 and so on.
The text file contains several records in the format given above; each field is separated by a colon.
I have to read this file, find the values, and update the corresponding fields in my SQL table (say, the DRG value 982 has to be updated in the DRG column of the SQL table).
Please help with doing this through a SQL query or an SSIS package.
If I got this task, I would use SSIS:
Create 2 data sources: a flat file (for the text file) and a SQL Server connection
Use a Lookup to look up the value from the text file for each record in the db table
Use an Execute SQL Task to update the records with the looked-up value (sketched below)
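The update statement behind that Execute SQL Task is plain parameterized SQL; a rough sketch with hypothetical table and column names, where the ? markers are mapped to the looked-up values:

-- Hypothetical names: set the DRG for the matching medical record number.
UPDATE dbo.PatientRecords
   SET drg = ?
 WHERE medical_rec_no = ?;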
You MIGHT try doing this by means of BULK INSERT.
Create a temp table to hold the new values
BULK INSERT the file into said table (see the caveats below)
[optionally do some data-enrichment/cleaning here]
merge the information from the temp-table into the actual table
The only problem with this MIGHT be that:
the server cannot access the file directly (e.g. when the file is on a network share)
the file is in a format that can't be handled by BULK INSERT
Given the example data above, you might need to load the data into one big column and then do the splitting into different columns by means of creative SQL (PATINDEX, SUBSTRING, the works...). You might try giving the colon as a field separator, but you'll still end up with data that needs (quite a bit of) cleaning.
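To make that concrete, a rough sketch of the load-then-split approach (the file path, table name, and parsing are assumptions about the real layout):

-- Hold each raw line in a single wide column.
CREATE TABLE #raw (line NVARCHAR(MAX));

-- Load the file one line per row
-- (assumes the file contains no tab characters, the default field terminator).
BULK INSERT #raw
FROM 'C:\load\patients.txt'
WITH (ROWTERMINATOR = '\n');

-- Creative SQL: pull the DRG value out of each line; adjust the length/parsing
-- to the real layout, then merge the parsed values into the actual table.
SELECT LTRIM(RTRIM(SUBSTRING(line, CHARINDEX('DRG:', line) + 4, 5))) AS drg
FROM #raw
WHERE CHARINDEX('DRG:', line) > 0;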