Program to update a database table, named in a parameter, from an Excel sheet in ABAP

I will come directly to the question.
I have two parameters: a file name and a table name. The requirement is to upload the data from the Excel sheet into the database table entered in the other parameter. This must happen at runtime: no hardcoding of field names, and the program should be flexible enough to suit any table. Please help.

I can think of two possible approaches:
1. Dynamic code generation -- write a program which writes a program
2. Use dynamic type tools
For 1, try googling.
For 2, see https://wiki.scn.sap.com/wiki/display/Snippets/Example+-+create+a+dynamic+internal+table - this wiki shows one way (though it may be overkill, as it creates the type from scratch, whereas any table in your SAP system is already a defined type in the Data Dictionary).
You can easily reference a parameterised table name in Open SQL, e.g. MODIFY (p_tab) ...
Perhaps you could do a generic SPLIT of each line read in from the file at the delimiter into a table of fields - you can then use ASSIGN COMPONENT to match the fields you have read in to the fields of your internal type.
If you are doing this, I think a whitelist of allowed tables would be wise - and authority checks. Otherwise someone could upload into SAP standard tables without authorisation.
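To make that concrete, here is a minimal, untested sketch combining the pieces above. It assumes the Excel sheet has been saved as tab-delimited text (a true .xls/.xlsx would need a converter such as ALSM_EXCEL_TO_INTERNAL_TABLE), and it leaves out the file read and the authority check; all names are illustrative:

REPORT zexcel_to_table.

PARAMETERS: p_file TYPE localfile LOWER CASE,
            p_tab  TYPE tabname.

DATA: lr_line   TYPE REF TO data,
      lt_raw    TYPE string_table,
      lv_raw    TYPE string,
      lt_fields TYPE string_table,
      lv_field  TYPE string.

FIELD-SYMBOLS: <ls_line> TYPE any,
               <lv_comp> TYPE any.

START-OF-SELECTION.
* Whitelist and authority check on p_tab belong here - see the warning above.

* Any DDIC table is already a defined type, so no need to build one from scratch.
  CREATE DATA lr_line TYPE (p_tab).
  ASSIGN lr_line->* TO <ls_line>.

* Assumption: lt_raw already holds the tab-delimited lines of the file
* (e.g. loaded via cl_gui_frontend_services=>gui_upload from p_file).
  LOOP AT lt_raw INTO lv_raw.
    CLEAR <ls_line>.
    SPLIT lv_raw AT cl_abap_char_utilities=>horizontal_tab INTO TABLE lt_fields.
    LOOP AT lt_fields INTO lv_field.
*     Generic field matching by position, as suggested above.
      ASSIGN COMPONENT sy-tabix OF STRUCTURE <ls_line> TO <lv_comp>.
      IF sy-subrc = 0.
        <lv_comp> = lv_field.  "plain move; dates and numbers may need conversion
      ENDIF.
    ENDLOOP.
    MODIFY (p_tab) FROM <ls_line>.
  ENDLOOP.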

Related

SSIS: Excel data source - if a column does not exist, use another column

I am using a SELECT statement in an Excel source to select just specific columns' data from Excel for import.
But I am wondering: is it possible to select data in such a way that I select, for example, a column named Column_1, but if this column does not exist in the Excel file, it tries to select a column named Column_2 instead? Currently, if Column_1 is missing, the data flow task fails.
Use a Script Component and write .NET code to read the Excel file, then check for the availability of Column_1 in the file. If the column is not present, use Column_2 as the input. A Script Component in SSIS can act as a source.
SSIS is metadata-based and does not support dynamic metadata; however, you can use a Script Component, as @nitin-raj suggested, to handle all known source columns. There is a good post below on how it can be done:
Dynamic File Connections
If you have many such files that can have varying columns, then it is better to create a custom component. However, you cannot have dynamic metadata even with a custom component; the set of columns must be known to SSIS upfront.
If the list of columns keeps changing and you cannot know the expected columns in advance, then you are better off handling the entire thing in C#/VB.NET using a Script Task in the control flow.
As a best practice, because SSIS metadata is static, any data quality and formatting issues in source files should be corrected before the SSIS data flow task runs.
I have seen this situation before, and there is a very simple fix. At the beginning of your SSIS package, use a File System Task to create a copy of the source Excel file, and then run a C# script or a PowerShell script to fix the columns, so that if Column_1 does not exist it is added at the appropriate spot in the Excel file, or, if the column name is wrong, it is corrected.
As a result, you will not need to refresh your SSIS metadata every time the package fails. This is a standard data standardization practice.
The easiest way is to add two data flow tasks, one for each Excel source SELECT statement, and use precedence constraints to execute the second data flow when the first one fails.
The disadvantage of this approach is that if the first data flow task fails for another reason, it will also try to execute the second one. You will need some advanced error handling to check whether or not the error was thrown due to missing columns.
But if I had a similar situation, I would use a Script Task to check whether the column exists and build the SQL command dynamically. Note that this SQL command must always return the same metadata (you must use aliases).
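For example, with an illustrative sheet name and alias (neither comes from the question), the Script Task could set the source's SQL command to one of these two statements; because both expose the single alias SourceValue, the data flow's metadata never changes:

SELECT Column_1 AS SourceValue FROM [Sheet1$]
SELECT Column_2 AS SourceValue FROM [Sheet1$]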
Helpful links
Overview of SSIS Precedence Constraints
Working with Precedence Constraints in SQL Server Integration Services
Precedence Constraints

Padding SSIS input source columns to avoid truncation errors?

First post. In SSIS I am using an ODBC source, and the database (or the ODBC driver) doesn't appear to report column metadata correctly for any of the tables in the database for varchar-type columns. Therefore, each time I import a table, I get truncation errors on all the varchar fields. Is there any way to set the size of these fields besides doing it ONE AT A TIME in the Advanced Editor? When importing a flat file source, you can select a padding % for string fields. Does something like this exist for OLE DB or ODBC sources? If not, is there any way I can override the column length to, say, force them all to be VARCHAR(1000)?
I have never experienced SQL Server providing the wrong metadata for an ODBC connection, and it is unlikely you have a ghost in the machine (deus ex machina). The column metadata can be set in the ODBC source via the Advanced Editor. I am willing to bet that is where the difference is. To confirm this:
Right click the ODBC connection and select the Advanced Editor
Click on the Input/Output Properties tab
Expand OLE DB Source Output
Expand both External Columns and Output Columns
Inspect each column pair and verify that the metadata matches
Correct any mismatches in the metadata
Let me know if that works. If it does not, please provide the data and the SQL query you are using.
The VARCHAR field width must be set to the maximum incoming field width. I know the default field width is 50. Regardless, each field must be set. I previously worked on a project with large numbers of columns in the input files. My solution was to store the column metadata in a database table and then build a C# application that read in that metadata, modified the *.dtsx file, and set the metadata on all columns. This is the best solution I am aware of for automating the task.
Unfortunately, I don't have much experience with pulling data through ODBC. Are you pulling from an Access database? Or, what are you pulling from?

Talend ETL tool

I am developing a migration tool using the Talend ETL tool (free edition).
Challenges faced:
Is it possible to create a Talend job that uses a dynamic schema every time it runs, i.e. with no hard-coded mappings in the tMap component?
I want the user to provide an input CSV/Excel file, and the job should create the mappings on the basis of that input file. Is this possible in Talend?
Any other free ETL tool, or any sample job, would also be helpful.
Yes, this can be done in Talend, but if you do not wish to use a tMap, then your table and file must match exactly. The way we have implemented it is for stage tables whose columns are all of type varchar. This works when you are loading raw data into a stage table, with validation done after the load, prior to loading the stage data into a data warehouse.
Here is a summary of our method:
the filenames contain the table name, so the process starts with a tFileList and parses the table name out of the file name.
using tMSSQLColumnList, obtain each column's name, type, and length for the table (one way is to store this as an inline table in a tFixedFlowInput)
run this through a tSetDynamicSchema to produce your dynamic schema for that table
use a file input that references the dynamic schema
load that into a tMSSQLOutput, again referencing the dynamic schema.
One more note on data types: it may work with data types other than varchar, but our stage tables have only varchar and datetime. We had issues with datetime, so we filtered out those column types with a tMap.
Keep in mind, this is a summary to point you in the right direction, not a precise tutorial. But with this information in hand, it can save you many hours of work while building your solution.

SSIS - using VB to dynamically create an OLE DB source where the SQL source is too dynamic

I'm very new to SSIS, but have this week been completely buried in tutorials on how to use it. This has been very fruitful so far, but now I'm stuck on what seems a more complicated task.
I have two source SQL tables, which are dynamically created using the T-SQL PIVOT function, so the number of columns in each will vary.
I feed a list of criteria into a Foreach Loop which is set to use an ADO enumerator to go through this data set, and a Script Task sets the string variable SQL_DOWNLOAD_STRING to point to one of the two potential tables, TABLEA or TABLEB, using an IF statement.
I tried using the standard OLE DB source in the data flow, but the associated metadata was generated based on TABLEA at the time of build. I then set the SQL statement to be based on the variable SQL_DOWNLOAD_STRING, and when I execute, the metadata of the source has changed, as it is obviously a different table. That said, even if it were the same table, there is no guarantee that the columns would be the same.
So my question is: how do I create the OLE DB source dynamically so that it reconfigures to the new structure of the table?
Then I'm trying to export the data to an Excel workbook (which again I can do for a static source), but in this instance the mapping would need to change too, and I imagine I'll need to rebuild the destination table to match the source from above.
If I can get a working example, then finishing the project should be pretty straightforward from there. Examples for this sort of thing seem pretty limited though :(
Please help! (PS: I'm working in VB.)
Thanks

A few questions from a Java programmer about porting a preexisting database stored in a .txt file to MySQL

I've been writing a library management Java app lately, and, up until now, the main library database is stored in a .txt file, which is loaded into an ArrayList in Java for creating and editing the database, with alterations saved back to the .txt file again. A very primitive method indeed. Hence, having heard about SQL, I'm considering porting my preexisting .txt database to MySQL. However, I have absolutely no idea how SQL, and specifically MySQL, works, except for the fact that it can interact with Java code. Can you suggest any books to buy or websites to visit? Will the book Head First SQL help, especially for using Java code to interact with an SQL database? It should be mentioned that I'm already comfortable with using third-party APIs.
View from 30,000 feet:
First, you'll need to figure out how to represent the text file data using appropriate SQL tables and fields. Here is a good overview of the different SQL data types. If each line of your data represents a single library record, then you'll only need to create one table. This is definitely the simplest way to do it, as the conversion will be able to work line by line. If the records contain a LOT of duplicated data, the more appropriate approach is to create multiple tables so that your database doesn't duplicate data. You would then link these tables together using IDs.
When you've decided how to split up the data, you create a MySQL database, and within that database, you create the tables (a database is just something that holds multiple tables). Connecting to your MySQL server with the console and creating a database and tables is described in this MySQL tutorial.
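For instance, a minimal single-table layout for the library data might look like this (the database, table, and column names, and the column sizes, are all illustrative, not taken from the question):

CREATE DATABASE library;
USE library;
CREATE TABLE book (
    id       INT AUTO_INCREMENT PRIMARY KEY,
    title    VARCHAR(200) NOT NULL,
    author   VARCHAR(100),
    pub_year INT
);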
Once you've got the database created, you'll need to write the code to access it. The link from OMG Ponies shows how to use JDBC in the simplest way to connect to your database. You then use that connection to create a Statement object and execute queries to insert, update, select, or delete data. If you're selecting data, you get a ResultSet back and can view the data. Here's a tutorial on using JDBC to select and use data from a ResultSet.
Your first piece of code should probably be a Java utility that reads the text file and inserts all the data into the database. Once the data is in place, you'll be able to update the main program to read from the database instead of the file.
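A minimal sketch of such a utility, assuming a tab-delimited library.txt with three fields matching the illustrative book table above, and placeholder connection details:

import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// One-off migration: read the old .txt database line by line and insert
// each record into MySQL. File layout and credentials are assumptions.
public class TxtToMySql {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/library";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             BufferedReader in = new BufferedReader(new FileReader("library.txt"));
             PreparedStatement insert = conn.prepareStatement(
                     "INSERT INTO book (title, author, pub_year) VALUES (?, ?, ?)")) {
            String line;
            while ((line = in.readLine()) != null) {
                String[] fields = line.split("\t");    // assumed tab-delimited
                insert.setString(1, fields[0]);
                insert.setString(2, fields[1]);
                insert.setInt(3, Integer.parseInt(fields[2]));
                insert.executeUpdate();
            }
        }
    }
}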
Know that the connection between a program and an SQL database goes through a connection library (for Java, JDBC). You write an instruction as an SQL statement, say
Select * from Customer order by name;
and then set up to retrieve data one record at a time. Or in the other direction, you write
Insert into Customer (name, addr, ...) values (x, y, ...);
and either replace x, y, ... with actual values or bind them to the connection according to the interface.
With this understanding you should be able to read pretty much any book or JDBC API description and get started.
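As a concrete, illustrative JDBC version of those two statements (the Customer table with name and addr columns, and the connection details, are assumptions), with the x, y values bound rather than spliced into the string, and the results read back one record at a time:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CustomerDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/library";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            try (PreparedStatement insert = conn.prepareStatement(
                    "INSERT INTO Customer (name, addr) VALUES (?, ?)")) {
                insert.setString(1, "Ada Lovelace");   // values bound, not concatenated
                insert.setString(2, "12 St James Square");
                insert.executeUpdate();
            }
            try (PreparedStatement query = conn.prepareStatement(
                         "SELECT * FROM Customer ORDER BY name");
                 ResultSet rs = query.executeQuery()) {
                while (rs.next()) {                    // one record at a time
                    System.out.println(rs.getString("name") + ", " + rs.getString("addr"));
                }
            }
        }
    }
}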