I have a task to add selected rows from alv grid to the transport request.
At this moment I already have:
Name of transport request
Selected rows (I put them in a table because I don't know what type they should be if I want to put them into the transport request):
First I get indexes:
call method grid->get_selected_rows
importing
et_index_rows = lt_rows.
Second I get rows that I need and put them into a new table:
if lt_rows is not initial.
  loop at lt_rows into ls_row.
    " ls_row is of type lvc_s_row, so the row number is in the component INDEX
    read table lt_variable index ls_row-index into ls_variable.
    if sy-subrc = 0.
      append ls_variable to lt_variable_changed.
    endif.
  endloop.
endif.
As I understand it, I need to pass all of this to the function module TR_OBJECTS_INSERT, but unfortunately I haven't found any information that would help me confirm whether I'm doing it correctly.
What is the critical need to transport data at runtime? It is unstable and not recommended.
Just create a customizing table in the Data Dictionary and insert the necessary ALV grid rows into it at runtime.
Then use a transport with object type R3TR TABU to move that customizing table to another system. Do it in batches, not piecemeal with 2 or 3 rows at a time as you are planning.
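If you nevertheless attach individual table rows to a request programmatically, the usual pattern is TR_OBJECTS_CHECK followed by TR_OBJECTS_INSERT with R3TR TABU entries and their keys. This is only a rough sketch under that assumption; the table name ZMY_CUSTTAB, the key handling, and the minimal parameter lists are illustrative, so verify the function module signatures in your system:

DATA: lt_ko200 TYPE STANDARD TABLE OF ko200,
      ls_ko200 TYPE ko200,
      lt_e071k TYPE STANDARD TABLE OF e071k,
      ls_e071k TYPE e071k.

* One header entry for the table (R3TR TABU, key function 'K')
ls_ko200-pgmid    = 'R3TR'.
ls_ko200-object   = 'TABU'.
ls_ko200-obj_name = 'ZMY_CUSTTAB'.
ls_ko200-objfunc  = 'K'.
APPEND ls_ko200 TO lt_ko200.

* One key entry per row that should travel with the request
LOOP AT lt_variable_changed INTO ls_variable.
  ls_e071k-pgmid      = 'R3TR'.
  ls_e071k-object     = 'TABU'.
  ls_e071k-objname    = 'ZMY_CUSTTAB'.
  ls_e071k-mastertype = 'TABU'.
  ls_e071k-mastername = 'ZMY_CUSTTAB'.
  ls_e071k-tabkey     = ls_variable-key_field.  "concatenated key of the row (illustrative)
  APPEND ls_e071k TO lt_e071k.
ENDLOOP.

* Check the entries, then attach them to the transport request you already have
CALL FUNCTION 'TR_OBJECTS_CHECK'
  TABLES
    wt_ko200 = lt_ko200
    wt_e071k = lt_e071k
  EXCEPTIONS
    OTHERS   = 1.

CALL FUNCTION 'TR_OBJECTS_INSERT'
  EXPORTING
    wi_order = lv_request   "your existing request number (assumed to be in lv_request)
  TABLES
    wt_ko200 = lt_ko200
    wt_e071k = lt_e071k
  EXCEPTIONS
    OTHERS   = 1.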
Here is the full tutorial.
But it's bad practice to do it like this. Replicating data across the landscape on a regular basis is a BASIS task and should be done by BASIS, not by a developer.
And replicating rows of business data at runtime is a horrible practice.
I have an Azure SQL database, and the records inside the table Spiderfood_RITMData in that database include 13 different fields. Lots of stuff. I have confirmed in SQL-SMS that the records have data in each field.
There are way more items in the database than PowerApps can see using LOOKUP (1600-9000 records or more). However, I know FOR A FACT that there is only ONE record that has any given value in the NUMBER column. It's not a primary key, but it is unique in the table.
In PowerApps, I am trying to pull that field so that I can eventually parse out the individual items.
So, the commands I'm trying are:
ClearCollect(MLE_test1, Filter('Spiderfood_RITMData', "RITM2170467" in Number));
ClearCollect(MLE_test2, Search('Spiderfood_RITMData',"RITM2170467", "Number"));
However, the Collection results for MLE_test1 and MLE_test2 both are empty EXCEPT for the value of NUMBER. Say what?!
I'm trying to use the examples posted on https://learn.microsoft.com/en-us/powerapps/maker/canvas-apps/functions/function-filter-lookup but I am honestly getting baffled by this.
How should I be formatting this call such that I can pull the whole record?
Big picture explanation: I need to do a lot of data LOOKUPS into my Spiderfood_RITMData table, but it has way more than 2000 rows, and PowerApps will not perform the LookUp correctly. So my presumably smart idea is to create a MUCH SMALLER "version" of Spiderfood_RITMData as a local collection, using a more delegable function (such as FILTER or IN). If I filter down to all records containing a given value of NUMBER, then I go from, say, a 10,000-record SQL table to a 10-record collection, and I can do LOOKUPS against that collection for the rest of the function (uh, I think -- I'm still trying to experiment accordingly). Please let me know if this is crazy or not.
LookUp is used to get just one record; instead, try this:
ClearCollect(MLE_test1, Filter('Spiderfood_RITMData', "RITM2170467" = Number));
This gets a collection with all the items where Number is equal to "RITM2170467".
Collections are limited to only 2,000 records each.
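Once the filtered rows are in a local collection, the follow-up lookups described in the question can run against that collection without delegation concerns. A small sketch of that idea; the column name ShortDescription is only a placeholder:

// ShortDescription is a placeholder for one of the other 12 columns
ClearCollect(MLE_small, Filter('Spiderfood_RITMData', Number = "RITM2170467"));
LookUp(MLE_small, Number = "RITM2170467", ShortDescription)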
I had the same issue. Go to App settings. Under Upcoming features, make sure Explicit column selection is turned off. Hope this does it for you.
I am working on a transformation step for Pentaho Kettle. It selects several input columns and, based on those, adds two new columns during the transformation. I am unable to understand (based on code from other plugins) how I can add the two new columns so that 1) steps downstream are aware of these columns and 2) I can push the transformed data into these columns.
Thanks in advance.
You might need to override meta.getStepFields() to add new ValueMetaInterface objects to the RowMetaInterface passed in. This is the standard way to add columns at runtime; however, the row's metadata (i.e. list of ValueMetaInterface objects) must be the same from row to row or else the next step in your transformation will complain.
Often when doing data-driven custom plugins, you consume as many rows as you need (using getRow()) in order to figure out what the outgoing row format/metadata will be, then you can construct a RowMetaInterface (usually using meta.getStepFields()) that will be passed into the putRow() call. If you intend to pass through the incoming fields, do something like:
RowMetaInterface outputRowMeta = getInputRowMeta().clone();
If you're creating new rows, use this:
RowMetaInterface outputRowMeta = new RowMeta();
Either way when you call meta.getStepFields(outputRowMeta, ...) it should populate outputRowMeta with the appropriate fields, by adding/changing/removing ValueMetaInterface objects from outputRowMeta.
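To make that concrete, here is a minimal sketch of the two pieces, assuming a custom step whose meta class overrides getFields(); the field names new_col_1/new_col_2 and the computed values are illustrative, not from the question:

import org.pentaho.di.core.row.RowDataUtil;
import org.pentaho.di.core.row.RowMetaInterface;
import org.pentaho.di.core.row.ValueMeta;
import org.pentaho.di.core.row.ValueMetaInterface;
import org.pentaho.di.core.variables.VariableSpace;
import org.pentaho.di.trans.step.StepMeta;

// In the StepMetaInterface implementation: declare the two new output fields
// so that downstream steps are aware of them.
public void getFields( RowMetaInterface rowMeta, String origin, RowMetaInterface[] info,
    StepMeta nextStep, VariableSpace space ) {
  ValueMetaInterface col1 = new ValueMeta( "new_col_1", ValueMetaInterface.TYPE_STRING );
  col1.setOrigin( origin );
  rowMeta.addValueMeta( col1 );

  ValueMetaInterface col2 = new ValueMeta( "new_col_2", ValueMetaInterface.TYPE_NUMBER );
  col2.setOrigin( origin );
  rowMeta.addValueMeta( col2 );
}

// In the StepInterface implementation (processRow): build the output row metadata once,
// then append the two computed values to each incoming row before calling putRow().
Object[] row = getRow();
if ( row == null ) {
  setOutputDone();
  return false;
}
if ( first ) {
  first = false;
  data.outputRowMeta = getInputRowMeta().clone();
  meta.getFields( data.outputRowMeta, getStepname(), null, null, this );
}
Object[] outputRow = RowDataUtil.resizeArray( row, data.outputRowMeta.size() );
outputRow[ getInputRowMeta().size() ]     = computedStringValue; // value for new_col_1
outputRow[ getInputRowMeta().size() + 1 ] = computedNumberValue; // value for new_col_2
putRow( data.outputRowMeta, outputRow );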
I've got a blog post using Groovy to add/replace fields in the incoming rows here:
http://funpdi.blogspot.com/2014/10/flatten-json-to-key-value-pairs-in-pdi.html
Not sure if that is similar to your use case or not. If you have more questions, feel free to find me on IRC at ##pentaho (my nick is usually mburgess_pdi)
If I have understood your question correctly, I think you are trying to create an output file with dynamic columns. You can do this by checking the "fast dumping" option in the Text File Output step. While doing so, do not define any column names in the "Fields" tab.
Check my image below:
Hope it helps :)
I have a table in Access linked to a SharePoint list. The table is comprised of about 15 fields whose contents are originally pulled from another data source (in Excel format). There are an additional 10 or so fields after the original 15 that make up a questionnaire (added via SharePoint) that contain answers to questions about the first 15 fields.
The data in the first 15 fields needs to be updated periodically when new data from my external source is available to download. A lot of the information will remain the same, however some of the fields within each of the rows will change and need to be updated. It is also important that the 10 fields that contain the questionnaire are not modified at all during this process.
Is there a way for me to easily update the cells that have changed using an Update query or something similar? The data does have a unique identifier column (ID NUMBER) that is present on the current SharePoint list and the external data source.
I was thinking from a logical standpoint to put the new external data into a table, find the ID Number in the SP list and new external data, compare the values in the rest of the row on the SP list to the row of the external data, and if a value is different update the cell with the value from the external data. Not sure how to accomplish this using Access queries though.
I really appreciate any help at all! If you need more information, please let me know. If you think there's a more logical way to do this, please let me know your feedback!!
Here's how to get started:
http://workerthread.wordpress.com/2009/02/03/using-access-2007-to-update-sharepoint-lists/
After you get the connection set up, it's just a matter of writing the queries correctly. If you need to run multiple queries periodically, you can set up a form with buttons and attach some VBA code to the buttons that runs the queries.
See also: MS Access - execute a saved query by name in VBA
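For the update itself, one common approach is to import the new external data into a staging table and run a single update query joined on the ID number. A rough sketch, assuming the staging table is called tblExternal, the linked SharePoint list is tblSPList, and Field1/Field2 stand in for your real column names; the 10 questionnaire fields are simply left out of the SET clause, so they are not touched:

UPDATE tblSPList INNER JOIN tblExternal
    ON tblSPList.[ID NUMBER] = tblExternal.[ID NUMBER]
SET tblSPList.Field1 = tblExternal.Field1,
    tblSPList.Field2 = tblExternal.Field2;

Save that as a query and run it from a button, or from VBA with something like CurrentDb.Execute "qryUpdateFromExternal", dbFailOnError.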
We get weekly data files (flat files) from our vendor to import into SQL, and at times the column names change or new columns are added.
What we have currently is an SSIS package to import columns that have been defined. Since we've assigned the mapping, SSIS only throws up an error when a column is absent. However when a new column is added (apart from the existing ones), it doesn't get imported at all, as it is not named. This is a concern for us.
What we'd like is to get the list of all the columns present in the flat file so that we can check whether any new columns are present before we import the file.
I am relatively new to SSIS, so detailed help would be much appreciated.
Thanks!
Exactly how to code this will depend on the rules for the flat file layout, but I would approach this by writing a script task that reads the flat file using the file system object and a StreamReader object, and looks at the columns, which are hopefully named in the first line of the file.
However, about all you can do if the columns have changed is send an alert. I know of no way to dynamically change your data transformation task to accommodate new columns. It will have to be edited to handle them. And frankly, if all you're going to do is send an alert, you might as well just use the error handler to do it, and save yourself the trouble of pre-reading the column list.
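For the pre-read itself, a Script Task along these lines would do it. This is only a sketch, assuming the file path and the expected header line are supplied through SSIS variables named User::FilePath and User::ExpectedHeader, and the result is written to User::HeaderChanged (all three variable names are illustrative and must be listed in the task's ReadOnly/ReadWrite variables):

using System;
using System.IO;

public void Main()
{
    // Read only the first line of the flat file, which should contain the column names.
    string filePath = Dts.Variables["User::FilePath"].Value.ToString();
    string expectedHeader = Dts.Variables["User::ExpectedHeader"].Value.ToString();

    string actualHeader;
    using (StreamReader reader = new StreamReader(filePath))
    {
        actualHeader = reader.ReadLine() ?? string.Empty;
    }

    // Flag a mismatch so the package can send the alert and/or stop before the data flow.
    Dts.Variables["User::HeaderChanged"].Value =
        !string.Equals(actualHeader.Trim(), expectedHeader.Trim(), StringComparison.OrdinalIgnoreCase);

    Dts.TaskResult = (int)ScriptResults.Success;
}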
I agree with the answer provided by @TabAlleman. SSIS can't natively handle dynamic columns (and neither can your SQL destination).
May I propose an alternative? You can detect a change in headers without using a C# Script Task. One way to do this would be to create a flat file connection that reads the entire row as a single column. Use a Conditional Split to discard anything other than the header row. Save that row to a Recordset object. Any change? Send email.
The "Get Header Row" DataFlow would look like this. Row Number if needed.
The Control Flow level would look like this. Use a ForEach ADO RecordSet object to assign the header row value to an SSIS variable CurrentHeader..
Above, the precedent constraints (fx icons ) of
[#ExpectedHeader] == [#CurrentHeader]
[#ExpectedHeader] != [#CurrentHeader]
determine whether you load data or send email.
Hope this helps!
I have worked for banking clients, and for banks, randomly adding columns to a DB is not possible due to federal requirements and rules. That said, I get that yours is not a federally regulated business. So here are some steps.
This is not a code issue but more one of soft skills and working with other teams (yours and your vendor's).
Steps you can take are:
(1) Agree on a solid column structure that you always require, because for newer columns the older data rows will carry NULL.
(2) If a new column is going to be sent by the vendor, you or your team need to make the DDL/DML changes to the table where the data will be inserted, of course with the correct data type.
(3) Document this change in the data dictionary, as over time you or another team member will analyze this data and will want to know what each attribute or column is used for.
(4) Long term, you do not want to keep changing the table structure monthly because one of your many vendors decided to change the way they send you data. Some clients push back very aggressively, others not so much.
If a third-party tool is an option for you, check out CozyRoc's Data Flow Task Plus. It handles variable columns in sources.
SSIS cannot make the columns dynamic.
One thing I always do is use a Script Task to read the first and last lines of a file.
If it is not the expected list of CSV columns, I mark the file as errored and continue/fail as required.
Headers are obviously important, but so are footers. Files can, through any unknown issue, be partially built. Requesting that the header also be placed at the rear of the file gives you a double check.
I also do not know if SSIS can do this dynamically, but it never ceases to amaze me how people add/change the order of columns and assume things will still work.
1- SSIS does not provide dynamic source and destination mapping, but some third-party components, such as Data Flow Task Plus, support this feature.
2- We can achieve this using an SSIS Script Task.
3- If the header is correct, process further with the migration; otherwise fail the package before the DFT executes.
4- Read the header line using the Script Task and store it in an array or list object.
5- Then compare those array values to user-defined variables, declared earlier, that contain the default column names.
6- If the values match exactly, progress further; otherwise fail it.
The problem is as follows: a group of users (~50k) must be filtered from a DB, four fields for each user must be saved into a variable, and then a second process will take each user and proceed to enable some licences in another system/platform. Both processes will be developed in the same application.
My first attempt was basically a query looping through the users, but I wonder if thinking in objects is a better approach.
I was thinking of a structure inside an object to hold the 4 parameters, then passing each user object to the other object; however, considering the amount of data, I'm not sure if this is fine.
Thanks,
ps. newbie using vb.net and framework 3.5
If you read the dataset directly from the DB, the table columns will keep the types of the original table in the DB. If the fields are bool, int, string and so on, there is no need to create objects from that dataset; you can apply your filters using a DataView and then pass that DataView to the other process.
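A rough sketch of that idea in VB.NET on .NET 3.5; the table name, column names, filter expression, and EnableLicence call are only placeholders for whatever your schema and licensing process actually use:

' Assumes ds is a DataSet already filled from the DB (e.g. via a SqlDataAdapter)
' and that the four fields you need are columns of the "Users" table.
Dim dv As New DataView(ds.Tables("Users"))
dv.RowFilter = "NeedsLicence = 1"   ' placeholder filter expression
dv.Sort = "UserId"

For Each rowView As DataRowView In dv
    Dim userId As String = rowView("UserId").ToString()
    Dim email As String = rowView("Email").ToString()
    ' ... read the other two fields here and hand them to the licensing process
    EnableLicence(userId, email)    ' placeholder for your second process
Next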