Saving data in a data structure, then in a table - vba

Is it possible to save all data first in a data structure and then save that data structure to a table in a database?
I have a process in which I have to write rows to a table very often. I want to collect these rows in a data structure, and then write the data structure to the Access table only once.

Sure, you can create your own data structures by using the Type statement or by creating Class Modules. The former is simpler, the latter more flexible.
Unfortunately, there is no built-in way to serialize your data structure into a table row, so you will have to write that code yourself, either by using an INSERT statement or a parameterized append query.
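For example, the parameterized append query could be saved in Access as something along these lines (a rough sketch; tblOrders and its fields are placeholder names):
PARAMETERS prmCustomerID LONG, prmOrderDate DATETIME, prmAmount CURRENCY;
INSERT INTO tblOrders ( CustomerID, OrderDate, Amount )
VALUES ( prmCustomerID, prmOrderDate, prmAmount );
Your VBA code would then loop over the elements of your data structure, set the parameters on the QueryDef, and call Execute once per element.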

Related

Mapping multiple layouts from a working SQL table - SSIS

I have a flat file as an input that has multiple layouts:
Client# FileType Data
------- -------- --------------------------------------
Client#1FileType0Dataxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Client#1FileType1Datayyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
Client#1FileType2Datazzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
Client#2FileType0Dataxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
My PLANNED workflow goes as follows: drop the temp table, load a SQL temp table with the columns Client#, FileType and Data, and then from there map my 32 file types to the actual permanent SQL table.
My question is: is that even doable, and how would you proceed?
Can you split such a working table into 32 sources? With SQL substrings? I am not sure how I will map my columns for the differing file types from my temp table, or which 'box' to use in my workflow.
What you are describing is a very reasonable approach to loading data in a database. The idea is:
Create a staging table where all the columns are strings.
Load data into the final table, using SQL manipulations.
The advantage of this approach is that you can debug any data anomalies in the database, which generally makes things much simpler.
The answer to your question is that the following functions are generally very useful in doing this:
substring()
try_convert()
This can get more complicated if the "data" is not fixed width. In that case you would have to use more complex string processing; recursive CTEs or JSON functionality might help there.
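For example, if the staging table has the columns Client#, FileType and Data, pulling one of the 32 layouts into its typed destination could look roughly like this (table names, column names and offsets below are only placeholders for your actual layout):
INSERT INTO dbo.FileType0_Target (ClientNumber, SomeIntField, SomeDateField)
SELECT s.[Client#],
       try_convert(int,  substring(s.Data, 1, 10)),   -- fixed-width slices of the Data column
       try_convert(date, substring(s.Data, 11, 8))
FROM dbo.StagingTable AS s
WHERE s.FileType = 'FileType0';                        -- one statement per layout
Repeat (or parameterize) one such statement, or one Execute SQL Task, per file type.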

Return dynamically-typed table rows via RFC?

I need to return the rows of some tables via RFC, and the names of these tables are not known before execution.
I have this statement, which gets executed in a loop:
SELECT *
  UP TO iv_max_count ROWS
  INTO TABLE <lt_result>
  FROM (iv_table_name) AS ltab
  WHERE (SQL_WHERE).
How can I concatenate the <lt_result> results into one list/table and return this via RFC?
Of course, all the tables can have different structures. Creating one big table which holds all rows does not help.
You can't return an arbitrary structure or structures in an RFC, they have to be predefined.
The best way I can think of to do this is to mimic the way SAP handles IDocs in the database. Your table would need a minimum of two fields: the first would be a descriptor field telling the caller what the table structure is, and the second would be a very long character field with all the data concatenated together, either fixed-width or delimited. This way, you can pass data from multiple tables in the same return structure.
If your calling program really knows nothing about SAP data sets, you would probably also need to grab metadata from table DD02L.
In short, that's not how ABAP and function modules work.
You have to define exactly what your input is and what your output structure/table looks like. You can return one structure that holds multiple deeply nested tables, so that you have only one return structure, but not dynamically!
Making this all dynamic makes things a lot more complex. Mostly unnecessarily.
One possible way:
you have to analyze the input and build dynamic structures and tables for each input table result
build a wrapping structure that consists of all the nested tables
return a DATA reference object, because you cannot return generic datatypes
your receiving program needs to have the same data structures defined; this means it must know exactly what it is getting back in order to dereference the data.
Another way:
Use Function Module RFC_READ_TABLE in a loop in the caller program
Reading multiple single tables dynamically in a loop without a join does not sound like ABAP programming; it sounds more like "I need data from SAP in a third-party tool".

Copy tables from query in Bigquery

I am attempting to fix the schema of a BigQuery table in which the type of a field is wrong (but the field contains no data). I would like to copy the data from the old schema to the new one using the UI ( select * except(bad_column) from ... ).
The problem is that:
if I select into a table, BigQuery drops the REQUIRED mode from the columns and therefore rejects the insert;
exporting via JSON loses information on dates.
Is there a better solution than creating a new table with all columns nullable/repeated, or manually transforming all of the data?
Update (2018-06-20): BigQuery now supports required fields on query output in standard SQL, and has done so since mid-2017.
Specifically, if you append your query results to a table with a schema that has required fields, that schema will be preserved, and BigQuery will check as results are written that it contains no null values. If you want to write your results to a brand-new table, you can create an empty table with the desired schema and append to that table.
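For example, with made-up dataset, table and column names, the standard SQL route could look like this (NOT NULL in the DDL corresponds to the REQUIRED mode):
-- create an empty table with the corrected schema
CREATE TABLE mydataset.new_table (
  id INT64 NOT NULL,
  created_at TIMESTAMP NOT NULL,
  good_column STRING
);
-- append the existing data, leaving out the bad column
INSERT INTO mydataset.new_table
SELECT * EXCEPT(bad_column)
FROM mydataset.old_table;
The remaining columns of old_table have to line up with the new schema for the SELECT * EXCEPT(...) to work.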
Outdated:
You have several options:
Change your field types to nullable. Standard SQL returns only nullable fields, and this is intended behavior, so going forward it may be less useful to mark fields as required.
You can use legacy SQL, which will preserve required fields. You can't use except, but you can explicitly select all other fields.
You can export and re-import with the desired schema.
You mention that export via JSON loses date information. Can you clarify? If you're referring to the partition date, then unfortunately I think any of the above solutions will collapse all data into today's partition, unless you explicitly insert into a named partition using the table$yyyymmdd syntax. (Which will work, but may require lots of operations if you have data spread across many dates.)
BigQuery now supports table clone features. A table clone is a lightweight, writeable copy of another table.
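For example (dataset and table names are placeholders):
CREATE TABLE mydataset.table_backup
CLONE mydataset.original_table;
The clone shares storage with the source at creation time, so you are only charged for data that diverges afterwards.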

SSIS Moving data from one place to another

I was asked by the company I work for to create an SSIS package that will take data from a few tables in one data source, change a few things in the data, and then put it into a few tables in the destination.
The main entity is "Person". In the People table, each person has a PersonID.
I need to loop over these records and, for each person, take data from his orders in the Orders table, and other data from a few other tables.
I know how to take data from one table and just move it to a different table in the destination. What I don't know is how to manipulate the data before dumping it into the destination. Also, how can I get data from a few tables for each PersonID?
I need to be done with this very fast, so if someone can tell me which items in SSIS I need to use and how, that would be great.
Thanks
Microsoft has a few tutorials.
Typically it is easiest to simply do your joins in SQL before extracting, and to use that query as the source for the extraction. You can also do data modification in that query.
I would recommend using code in SSIS tasks only for things where SQL is problematic: custom scalar functions (which can be quicker in the scripting runtime) and handling disparate data sources.
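For example, a source query for the data flow could be as simple as this (table and column names are assumptions about your schema):
SELECT p.PersonID,
       p.FirstName + ' ' + p.LastName AS PersonName,   -- manipulation done in SQL
       o.OrderNumber,
       o.OrderDate
FROM dbo.People AS p
JOIN dbo.Orders AS o
    ON o.PersonID = p.PersonID;
Use that query as the SQL command of the source instead of picking a single table.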
I would start with the Data Flow Task.
Use the OLE DB Source to execute a stored proc that will read, manipulate and return the data you need.
Then you can pass that to either an OLE DB Destination or an OLE DB Command that will move it to the destination.
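A hypothetical stored procedure for the OLE DB Source to call might look like this (all names are made up):
CREATE PROCEDURE dbo.GetPersonExportData
AS
BEGIN
    SET NOCOUNT ON;    -- keeps row-count messages from confusing the SSIS source
    SELECT p.PersonID,
           UPPER(p.LastName) AS LastName,   -- example of reshaping data inside the proc
           o.OrderNumber
    FROM dbo.People AS p
    JOIN dbo.Orders AS o
        ON o.PersonID = p.PersonID;
END;
Doing the manipulation in the procedure keeps the SSIS package itself to a plain source-to-destination mapping.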

Database redesign then reload approach using linq to excel and entity framework

So I've got a few tables with a lot of data. Let's say they are tables A, B and C. I want to add auto-increment ID fields to each of those tables, normalize them by swapping some fields between the tables, and add an additional table D. Gulp. There are three goals: 1) redesign the database and reload the existing data; 2) enable a data load from a spreadsheet to add/edit/delete the four tables; 3) enable a web front end to add/edit/delete the four tables.
My current approach:
I thought I would export all the data in the 3 existing tables to a flat CSV file (spreadsheet).
Then refactor the database structure.
Then use LINQ to Excel to read the CSV spreadsheet records back into DTO objects.
Then use Entity Framework to transform those DTO objects into entities and update the database with the appropriate relationships between tables.
The spreadsheet would be re-used for future bulk data adds/edits/deletes.
What about the following tools?
SSIS
Bulk insert
Stored procedures
Am I overcomplicating this? Is there a better way?
What's your definition of "a lot of data"? For some people it's 10,000 rows, for others it's billions of rows.
I would think that a few stored procedures should be able to do the trick, mostly made up of simple INSERT..SELECT statements. Use sp_rename to rename the existing tables, create your new tables, then move the data over.
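For one of the tables, the sketch below shows the idea (all names and columns are placeholders; repeat per table):
EXEC sp_rename 'dbo.TableA', 'TableA_old';

CREATE TABLE dbo.TableA (
    TableAID    int IDENTITY(1,1) PRIMARY KEY,   -- new auto-increment key
    SomeColumn  varchar(100) NOT NULL,
    OtherColumn datetime NULL
);

INSERT INTO dbo.TableA (SomeColumn, OtherColumn)
SELECT SomeColumn, OtherColumn
FROM dbo.TableA_old;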
If you already have to develop a bulk import process, then it might make sense to get reuse out of that by doing an export, but I wouldn't create that whole process just for this scenario.
There might be cases where this approach isn't the best, but I don't see anything in your question that makes me think it would be a problem.
Make sure that you have a good back-up first of course.