Oracle Apex Data Load - sql

After creating some pages with APEX 5.1's Data Load Wizard, is it possible, via the Page Designer, to hook into the loaded data and populate two extra columns? I am attempting to add the following columns: (1) Updater and (2) Update date/time. I was able to do this on an Interactive Grid page by adding PL/SQL code to the 'Save' page processing, but I could not do the same for the Data Load pages.
I have a five-column table, two of whose columns (Updater, Update date/time) are hidden from the APEX app user.

One option is to use a Data Load Transformation Rule. You can add this by editing the Data Load Definition via Shared Components. You specify the column, choose a Rule Type of PL/SQL Expression, and set the expression to whatever you want, e.g. SYSDATE.
Another option is to add a trigger to the table to set those columns.
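For the trigger approach, a minimal sketch could look like the following; the table and column names (MY_DATA, UPDATER, UPDATE_DT) are placeholders, and the APEX application user is read from the APEX$SESSION context:

```sql
-- Minimal sketch: MY_DATA, UPDATER and UPDATE_DT are placeholder names.
CREATE OR REPLACE TRIGGER my_data_biu
  BEFORE INSERT OR UPDATE ON my_data
  FOR EACH ROW
BEGIN
  -- Prefer the APEX application user when the session context is set,
  -- otherwise fall back to the database user.
  :NEW.updater   := COALESCE(SYS_CONTEXT('APEX$SESSION', 'APP_USER'), USER);
  :NEW.update_dt := SYSDATE;
END;
/
```

The trigger covers rows coming in through the Data Load pages as well as any other insert or update path, which is why it is often the simpler of the two options.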

Problem appending CSV upload to existing BigQuery table

I'm used to quickly uploading a CSV file to append data to an existing table in BigQuery.
I've made the new table name the same as the existing table, and I've then had options to overwrite or append data to the existing table.
This seems to have changed in the past few days and there is a new BigQuery console UI.
When I try and create a new table from a CSV file upload, under the table name field it currently says:
Unicode letters, marks, numbers, connectors, dashes or spaces allowed.
The job will create the specified destination table if needed, or the
table must be empty if it already exists.
However, when I try and create a table with the same name as an existing table (even though the existing table is empty), I get a red warning saying:
Table already exists
Does anyone know if this feature has now been removed or how to easily append data?
The long way round is to upload the CSV to a new table, then query the new table and set the destination to append to or overwrite the existing table. Not ideal, particularly having to define a new table schema.
In order to append a CSV file to an existing BigQuery table when using the Console, please follow the instructions below:
In the Explorer panel, expand your project and select a dataset.
Expand the Actions option and click Open.
In the details panel, click Create table.
On the Create table page, in the Source section:
For Create table from, select Upload.
Browse to the file on your system.
On the Create table page, in the Destination section:
For Dataset name, choose the appropriate dataset.
In the Schema section, for Auto detect, check Schema and input parameters to enable schema auto detection. Alternatively, you can manually enter the schema definition.
Click Advanced options.
For Write preference, choose Append to table.
Please review this document that expands on the same topic.
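If you would rather script the append than click through the console, a minimal sketch with the google-cloud-bigquery Python client library looks like this; the project, dataset, table, and file names are placeholders:

```python
# Minimal sketch using the google-cloud-bigquery client library;
# the project, dataset, table, and file names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.my_dataset.my_table"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,          # skip the header row
    autodetect=True,              # or supply an explicit schema instead
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

with open("data.csv", "rb") as source_file:
    load_job = client.load_table_from_file(
        source_file, table_id, job_config=job_config
    )

load_job.result()  # wait for the append to finish
```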

SSIS: Excel data source - if column not exists use other column

I am using a SELECT statement in the Excel source to import only specific columns from Excel.
But I am wondering: is it possible to select the data in such a way that I select, for example, the column named Column_1, but if that column does not exist in the Excel file, it falls back to selecting the column named Column_2? Currently, if Column_1 is missing, the data flow task fails.
Use a Script Task and write .NET code to read the Excel file and check whether Column_1 is available in the file. If the column is not present, use Column_2 as the input, as sketched below. A Script Component in SSIS can also act as a source.
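A minimal sketch of that header check in .NET, assuming the Microsoft ACE OLE DB provider is installed and using placeholder file and sheet names:

```csharp
using System;
using System.Data;
using System.Data.OleDb;

class ExcelColumnCheck
{
    static void Main()
    {
        string path = @"C:\data\input.xlsx";   // placeholder path
        string conn = "Provider=Microsoft.ACE.OLEDB.12.0;" +
                      "Data Source=" + path + ";" +
                      "Extended Properties=\"Excel 12.0 Xml;HDR=YES\"";

        using (var connection = new OleDbConnection(conn))
        {
            connection.Open();

            // Read the column metadata of the first worksheet.
            DataTable cols = connection.GetOleDbSchemaTable(
                OleDbSchemaGuid.Columns,
                new object[] { null, null, "Sheet1$", null });

            bool hasColumn1 = false;
            foreach (DataRow row in cols.Rows)
            {
                if (string.Equals((string)row["COLUMN_NAME"], "Column_1",
                                  StringComparison.OrdinalIgnoreCase))
                {
                    hasColumn1 = true;
                }
            }

            // Fall back to Column_2 when Column_1 is missing.
            string sourceColumn = hasColumn1 ? "Column_1" : "Column_2";
            Console.WriteLine("Using column: " + sourceColumn);
        }
    }
}
```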
SSIS is metadata-based and does not support dynamic metadata; however, you can use a Script Component, as @nitin-raj suggested, to handle all known source columns. There is a good post below on how it can be done.
Dynamic File Connections
If you have many such files that can have varying columns, it is better to create a custom component. However, you cannot have dynamic metadata even with a custom component; the set of columns must be known to SSIS up front.
If the list of columns keeps changing and you cannot know the expected columns in advance, you are better off handling the entire thing in C#/VB.Net using a Script Task in the control flow.
As a best practice, because SSIS metadata is static, any data quality and formatting issues in the source files should be corrected before the SSIS data flow task runs.
I have seen this situation before, and there is a very simple fix. At the beginning of your SSIS package, use a File System Task to create a copy of the source Excel file, then run a C# script or execute a PowerShell script to rename the columns, so that if Column_1 does not exist it is either added at the appropriate spot in the Excel file or, if the column name is simply wrong, it is corrected.
As a result, you will not need to refresh your SSIS metadata every time it fails. This is a standard data standardization practice.
The easiest way is to add two data flow tasks, one data flow for each Excel source select statement and use precedence constraints to execute the second data flow when the first one fails.
The disadvantage of this approach is that if the first data flow task fails for another reason, it will also try to execute the second one. You will need some advanced error handling to check if the error is thrown due to missing columns or not.
But if I had a similar situation, I would use a Script Task to check whether the column exists and build the SQL command dynamically. Note that this SQL command must always return the same metadata (you must use aliases), as in the sketch below.
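A minimal sketch of the body of such a Script Task's Main() method; the variable name User::SourceQuery and the column names are placeholders, and hasColumn1 would come from a header check like the one sketched earlier:

```csharp
public void Main()
{
    // Assume hasColumn1 was set by a header check such as the one above.
    bool hasColumn1 = true;

    // Alias the chosen column so the data flow metadata never changes.
    string selectedColumn = hasColumn1 ? "Column_1" : "Column_2";
    string query = "SELECT [" + selectedColumn + "] AS SourceValue FROM [Sheet1$]";

    // Hand the query to the Excel source through an SSIS variable
    // (e.g. one named User::SourceQuery, used as the source's
    // "SQL command from variable").
    Dts.Variables["User::SourceQuery"].Value = query;
    Dts.TaskResult = (int)ScriptResults.Success;
}
```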
Helpful links
Overview of SSIS Precedence Constraints
Working with Precedence Constraints in SQL Server Integration Services
Precedence Constraints

VBA to send a SQL string to a Tableau Connection

I have an Excel workbook that allows users to update multiple SQL statements at once based on data entered into cells.
The users then copy the updated SQL, go into Tableau, paste it into the corresponding custom SQL data source, and refresh it.
Is there a way to send the updated SQL code directly to the corresponding connection in Tableau?
If the table name is changing, that presents a challenge.
I have two suggestions.
Modify the .twb XML.
A Tableau workbook file is simply an XML file. One part of the XML contains the connection information, including your custom SQL. VBA has libraries for manipulating XML, so you can write some custom VBA code to modify the XML that contains the custom SQL.
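A minimal sketch of that approach, assuming the custom SQL lives in a <relation type='text'> node of the .twb file; the file path, cell reference, and XPath are placeholders and may need adjusting for your workbook:

```vba
' Minimal sketch, assuming the custom SQL lives in a <relation type='text'>
' node of the .twb file (the path, cell, and XPath are placeholders).
Sub UpdateTableauCustomSql()
    Dim doc As Object
    Dim relationNode As Object
    Dim newSql As String

    newSql = Range("A1").Value  ' the rebuilt SQL from the workbook

    Set doc = CreateObject("MSXML2.DOMDocument.6.0")
    doc.async = False
    If doc.Load("C:\reports\MyDashboard.twb") Then
        ' Find the first custom-SQL relation node (adjust the XPath as needed).
        Set relationNode = doc.SelectSingleNode("//relation[@type='text']")
        If Not relationNode Is Nothing Then
            relationNode.Text = newSql
            doc.Save "C:\reports\MyDashboard.twb"
        End If
    End If
End Sub
```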
Use VBA to create/alter a view in Teradata.
Teradata supports database views. With VBA, you can connect to Teradata and create or alter a view that changes based on your parameters from Excel. The key is to keep the view name constant; then Tableau does not need to be changed each time the underlying view definition changes.
Of the two, my preference would be the second option. It's cleaner and doesn't require distributing a new workbook file each time.
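A minimal sketch of the second option, assuming an ODBC DSN named TeradataDSN and placeholder view, cell, and credential names:

```vba
' Minimal sketch of option 2, assuming an ODBC DSN named "TeradataDSN"
' and placeholder view, cell, and credential names.
Sub ReplaceTeradataView()
    Dim conn As Object
    Dim viewSql As String

    ' Build the new SELECT from the workbook's cells.
    viewSql = "REPLACE VIEW mydb.v_tableau_source AS " & Range("B1").Value

    Set conn = CreateObject("ADODB.Connection")
    conn.Open "DSN=TeradataDSN;UID=myuser;PWD=mypassword;"
    conn.Execute viewSql
    conn.Close
End Sub
```

Because the Tableau data source points at the constant view name (mydb.v_tableau_source in this sketch), only the view definition changes; the workbook itself never has to be touched.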

Program to update the database table from the parameter with the excel sheet from select option in ABAP

I will come directly to the question.
I have two parameters: a file name and a table name. The requirement is to upload the data from the Excel sheet into the database table entered in the other parameter. This should happen at run time, with no hard-coding of field names, and the program should be flexible enough to suit any table. Please help.
I can think of two possible approaches:
Dynamic code generation -- write a program which writes a program
Use dynamic type tools
For 1. try googling
For 2. see https://wiki.scn.sap.com/wiki/display/Snippets/Example+-+create+a+dynamic+internal+table - this wiki shows a way (not sure if it is overkill as it creates the type from scratch whereas any table in your SAP system is already a defined type in the Data Dictionary).
You can easily reference a parameterised table in Open SQL, e.g. MODIFY (p_tab) ...
Perhaps you could do a generic SPLIT of each line read in from the file by the delimiter into a table of fields; you can then use ASSIGN COMPONENT to match the fields you have read in to the fields of your internal type, as in the sketch below.
If you are doing this, I think a whitelist of allowed tables would be wise, as well as authority checks. Otherwise someone could upload into SAP standard tables with no authorisation.
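Putting those pieces together, a minimal sketch (with placeholder parameter names, a tab-delimited file assumed, and the actual file upload omitted) might look like this:

```abap
PARAMETERS: p_file TYPE string,
            p_tab  TYPE tabname.

DATA: lt_lines  TYPE TABLE OF string,
      lv_line   TYPE string,
      lt_fields TYPE TABLE OF string,
      lv_field  TYPE string,
      lr_table  TYPE REF TO data,
      lr_line   TYPE REF TO data.

FIELD-SYMBOLS: <lt_data> TYPE STANDARD TABLE,
               <ls_row>  TYPE any,
               <lv_comp> TYPE any.

START-OF-SELECTION.
* Build an internal table typed like the target database table.
  CREATE DATA lr_table TYPE STANDARD TABLE OF (p_tab).
  ASSIGN lr_table->* TO <lt_data>.
  CREATE DATA lr_line TYPE (p_tab).
  ASSIGN lr_line->* TO <ls_row>.

* File upload into lt_lines omitted here (e.g. via cl_gui_frontend_services).
  LOOP AT lt_lines INTO lv_line.
    SPLIT lv_line AT cl_abap_char_utilities=>horizontal_tab INTO TABLE lt_fields.
    LOOP AT lt_fields INTO lv_field.
*     Match the n-th field of the file to the n-th component of the row.
      ASSIGN COMPONENT sy-tabix OF STRUCTURE <ls_row> TO <lv_comp>.
      IF sy-subrc = 0.
        <lv_comp> = lv_field.
      ENDIF.
    ENDLOOP.
    APPEND <ls_row> TO <lt_data>.
  ENDLOOP.

* Dynamic Open SQL update of the parameterised table.
  MODIFY (p_tab) FROM TABLE <lt_data>.
```

Data conversions (dates, amounts) and the whitelist and authority checks mentioned above would still need to be added around this.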

Programmatically Archive Access Backend Tables

I have an Access application that is now split into front end and back end databases. I would like the user to periodically archive the data from the back end tables to improve the database performance. What I have in mind is the following:
Create a new empty database file
Create empty copies of selected tables from original backend database
Insert data from the original back-end database into the new database tables based on certain criteria, for example a date range
Delete the archived data from the original back-end database tables
Can I achieve this programmatically, so the user only needs to perform simple actions like clicking a button and entering the date range?
I am using Access 2003 for my application.
Thanks in advance for the help.
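A minimal sketch of those four steps using DAO, callable from a button's click event that collects the date range; the file paths, table name, and date column are placeholders:

```vba
' Minimal sketch of the four archive steps using DAO; the paths,
' table name (tblOrders), and date column (OrderDate) are placeholders.
Sub ArchiveBackendData(dtFrom As Date, dtTo As Date)
    Dim db As DAO.Database
    Dim strArchive As String

    strArchive = "C:\Data\Archive_" & Format(Date, "yyyymmdd") & ".mdb"

    ' 1. Create a new empty database file.
    DBEngine.CreateDatabase strArchive, dbLangGeneral

    Set db = DBEngine.OpenDatabase("C:\Data\Backend.mdb")

    ' 2 + 3. Create the table in the archive and copy the matching rows.
    db.Execute "SELECT * INTO tblOrders IN '" & strArchive & "' " & _
               "FROM tblOrders WHERE OrderDate BETWEEN #" & _
               Format(dtFrom, "mm/dd/yyyy") & "# AND #" & _
               Format(dtTo, "mm/dd/yyyy") & "#", dbFailOnError

    ' 4. Remove the archived rows from the live back end.
    db.Execute "DELETE FROM tblOrders WHERE OrderDate BETWEEN #" & _
               Format(dtFrom, "mm/dd/yyyy") & "# AND #" & _
               Format(dtTo, "mm/dd/yyyy") & "#", dbFailOnError

    db.Close
End Sub
```

Repeat the SELECT INTO / DELETE pair for each table you want to archive; compacting the back end afterwards also helps performance.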
Concerning your Boolean field, if it bothers you to see the -1 and 0, just open the table in Design View, select your Boolean field, then in the properties, for 'Format', either double-click to cycle through the options or click the drop-down arrow to select the desired format.
It will then display as either 'True/False', 'Yes/No', or 'On/Off'.