RFT Datapool Setting

Does RFT have a limit on how much data can be set into a form from a datapool?
We are reading data from CSV files that contain 70 columns each for two users, and the data is set into forms. After processing the page (i.e. after the run), we should have 227 rows per user in the database, but we get 227 rows for one user and only 87 rows for the other.
On the other hand, when we do the same with another type of form (whose CSV files have 46 columns), it executes correctly and enters 163 rows per user into the database.
Is there any setting that limits the size of a datapool?

RFT datapools don't have such a small limit, so it must be something related to your script.
To confirm this, you can write a simple script that iterates over your "large" datapool and prints each record to System.out.
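A minimal sketch of such a check, using the dpDone()/dpNext()/dpString() helpers inherited from RationalTestScript. It assumes the "large" datapool is associated with the script and the iteration count is left at 1 so the loop itself walks all records; the column headers ("UserId", "Field01") are placeholders for your own CSV headers:

    import com.rational.test.ft.script.RationalTestScript;

    public class DatapoolDump extends RationalTestScript {
        public void testMain(Object[] args) {
            int row = 0;
            while (!dpDone()) {                  // true once no datapool records remain
                row++;
                // Replace "UserId" / "Field01" with real column headers from your datapool.
                System.out.println("record " + row + ": "
                        + dpString("UserId") + " | " + dpString("Field01"));
                dpNext();                        // advance to the next record
            }
            System.out.println("total records read: " + row);
        }
    }

If this prints every record, the datapool itself is fine and the missing rows are being lost in the form-filling logic of your original script.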

Related

Export multiple files with InDesign Data Merge

I am creating progress reports for around 50 different companies in InDesign. Each report is 10 pages long and has approximately 40 images and text fields that need to change based on the company.
I set up a data merge in InDesign and mapped all of the text and image fields. When I execute the data merge, the text and images map perfectly, but it creates one large 500-page document (10 pages x 50 companies), i.e. the report for Company A is on pages 1-10, the report for Company B is on pages 11-20, and so on.
While I could break this up into individual reports in Acrobat Pro, that step seems like it should be unnecessary. How can this be automated, preferably within InDesign? And how would I then save each file based on a field in the merge CSV?
I agree with Nicolai; I don't think the default Data Merge option can create split documents.
Maybe you can use the following script to split your document into parts:
https://creativepro.com/free-script-splits-long-indesign-files/

Reading metadata CSV from a datalake, too big for a lookup activity

I need to create a pipeline that reads CSVs from a folder and loads them from row 8 into an Azure SQL table; the first 5 rows will go into a different table ([tblMetadata]).
So far I have done it using a Lookup activity, and it works fine, but one of the files is bigger than 6 MB and it fails.
I checked all the options in Lookup and read everything about the Copy activity (which I am using to load the main data, skipping 7 rows). The pipeline is created using the GUI.
The output from the Lookup is used as parameters for a stored procedure that inserts into tblMetadata.
Can someone advise me how to deal with this? At the moment I am in training and no one can help me on site.
You could probably do this with a single Data Flow activity that has a couple of transformations.
You would use a Source transformation that reads from a folder using folder paths and wildcards, then add a conditional split transformation to send different rows to different sinks.
I worked around it in a different way: I modified the CSVs being imported so that the whole metadata is in the first row (this was part of a different project of mine), then used "First row only" in the Lookup.

Import Unformatted txt file into SQL

I am having an issue importing data into SQL from a text file. Not because I don't know how...but because the formatting is pretty much terrible for this purpose. Below is an altered sample of the types of text files I need to work with:
1 VA - P
2 VB to 1X P
3 VC to 1Y P
4 N - P
5 G to 1G,Frame P
6 Fout to 1G,Frame P
7 Open Breaker P
8 1B to 1X P
9 1C to 1Y P
Test Status: Pass
Hi-Pot # 1500V: Pass
Customer Order:904177-F
Number: G4901626-200
Serial Number: J245F6-2D03856
Catalog #: CBDC37-X5LE30-H40-L630C-4GJ-G31
Operator: TGY
Date: Aug 01, 2013
Start Time: 04:09:26
Finish Time: 04:09:33
The first 9 lines are all specific test results (tab separated), with header information below. My issue is that I need to figure out:
How can I take the data above and turn it into something broken down into a standard column format to import into SQL?
How can I then automate this such that I can loop through an entire folder structure?
What you see above is one of hundreds of files divided into several sub-directories.
Also note that the number of test lines above the header information varies from file to file. The header information remains in much the same format, though. This is all legacy data that cannot be regenerated, but it needs to be imported into our SQL databases.
I am thinking of using an SSIS project with a custom script to import the data: splitting the top section from the bottom by looking for the first empty row, then pivoting the header data into column format, merging, and moving on. But I don't write much VB and I'm not sure how to approach that.
I am working in a SQL Server 2008R2 environment with access to BIDS.
Thoughts?
I would start by importing the data as all character data into a table with a single field, one record per line. Then, from that table, you can parse each record into the fields and field types appropriate for each line. Hopefully there is a way to figure out what kind of data each line is, whether each file is consistent in order, or whether the header record indicates information about subsequent lines. From there, the data can be moved (parsing may take more than one pass) into a final table stored in a format that is usable for whatever you need.
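Purely for illustration (in standalone Java rather than T-SQL), here is roughly what that line-by-line classification could look like. It assumes the two patterns visible in the sample above: tab-separated test-result lines that start with a sequence number, and header lines of the form "Key: Value".

    import java.util.Arrays;

    public class LegacyLineParser {
        // Classifies one raw line and splits it into candidate column values.
        public static String[] parse(String line) {
            if (line.trim().isEmpty()) {
                return new String[] { "BLANK" };
            }
            if (line.matches("\\d+\\t.*")) {
                // Tab-separated test result, e.g. "5<TAB>G to 1G,Frame<TAB>P"
                String[] parts = line.split("\t");
                return new String[] { "TEST", parts[0], parts[1], parts[parts.length - 1] };
            }
            if (line.contains(":")) {
                // Header line, e.g. "Serial Number: J245F6-2D03856"
                String[] kv = line.split(":", 2);
                return new String[] { "HEADER", kv[0].trim(), kv[1].trim() };
            }
            return new String[] { "UNKNOWN", line };
        }

        public static void main(String[] args) {
            System.out.println(Arrays.toString(parse("5\tG to 1G,Frame\tP")));
            System.out.println(Arrays.toString(parse("Serial Number: J245F6-2D03856")));
        }
    }

The same decisions can be expressed in T-SQL against the single-field staging table once the data is in the database.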
I would first concentrate on getting the data into the database in the least complicated (and least error prone) way possible. Create a table with three columns: filename, line_number and line_data. Plop all of your files into that table and then you can start to think about how to interpret the data. I would probably be looking to use PIVOT, but if different files can have different numbers of fields it may introduce complications.
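For the load itself, here is a rough standalone sketch (again Java/JDBC, just to show the shape of it); the staging table dbo.RawLines(filename, line_number, line_data), the folder path, and the connection string are all assumptions you would replace:

    import java.nio.file.*;
    import java.sql.*;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class RawLineLoader {
        public static void main(String[] args) throws Exception {
            // Placeholder connection string and root folder for this sketch.
            String url = "jdbc:sqlserver://localhost;databaseName=Legacy;integratedSecurity=true";
            List<Path> files;
            try (Stream<Path> walk = Files.walk(Paths.get("C:/legacy/tests"))) {
                // Pick up every .txt file in the folder tree, including sub-directories.
                files = walk.filter(p -> p.toString().toLowerCase().endsWith(".txt"))
                            .collect(Collectors.toList());
            }
            try (Connection con = DriverManager.getConnection(url);
                 PreparedStatement ins = con.prepareStatement(
                     "INSERT INTO dbo.RawLines (filename, line_number, line_data) VALUES (?, ?, ?)")) {
                for (Path file : files) {
                    List<String> lines = Files.readAllLines(file);
                    for (int i = 0; i < lines.size(); i++) {
                        ins.setString(1, file.getFileName().toString());
                        ins.setInt(2, i + 1);        // 1-based line number within the file
                        ins.setString(3, lines.get(i));
                        ins.addBatch();
                    }
                    ins.executeBatch();              // one batch per file
                }
            }
        }
    }

Once the raw lines are in that table, the parsing and any PIVOT work can be done in T-SQL as described above.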
I would use a different approach and use an SSDT/SSIS package to import the data.
Add a Script Component to read in the text file and convert it to XML. It's not hard; there are many examples on the web. In your script, store the XML you build in a variable (a rough sketch of that conversion follows these steps).
Add a Data Flow.
Add an XML Source. In the XML Source you can select the XML variable you created and process either group of data present in your file. Here is some information on using the XML Source.
Add a destination task to import the data into a destination of your choice.
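The Script Component itself would be written in C# or VB inside SSIS; the Java sketch below is only meant to show the kind of XML the conversion could produce, with two repeating groups that the XML Source can then read. Element and attribute names are made up for this sketch:

    import java.nio.file.*;
    import java.util.List;

    public class TextToXml {
        // Emits <Result> elements for the tab-separated test lines and
        // <Header> elements for the "Key: Value" lines.
        public static String toXml(List<String> lines) {
            StringBuilder xml = new StringBuilder("<TestFile>\n");
            for (String line : lines) {
                if (line.trim().isEmpty()) {
                    continue;
                } else if (line.matches("\\d+\\t.*")) {
                    String[] p = line.split("\t");
                    xml.append("  <Result seq=\"").append(p[0])
                       .append("\" text=\"").append(escape(p[1]))
                       .append("\" status=\"").append(escape(p[p.length - 1])).append("\"/>\n");
                } else if (line.contains(":")) {
                    String[] kv = line.split(":", 2);
                    xml.append("  <Header name=\"").append(escape(kv[0].trim()))
                       .append("\" value=\"").append(escape(kv[1].trim())).append("\"/>\n");
                }
            }
            return xml.append("</TestFile>").toString();
        }

        private static String escape(String s) {
            return s.replace("&", "&amp;").replace("<", "&lt;").replace("\"", "&quot;");
        }

        public static void main(String[] args) throws Exception {
            // Path is a placeholder; point it at one of the legacy test files.
            System.out.println(toXml(Files.readAllLines(Paths.get("C:/legacy/tests/sample.txt"))));
        }
    }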
This solution assumes your input lines are terminated {CR}{LF}, the normal Windows way.
Tell MSSQL's Import/Export Wizard to import a Flat File; the Format is "Delimited"; the "Text Qualifier" is the {CR}; the "Header Row Delimiter" is the {LF}; and the OutputColumnWidth (in "Advanced") is a bit more than the longest possible line length.
It's simple and it works.
I just used this to import 23 million lines of mixed up data, and it took less than ten minutes. Now to edit it...

Updating information in an Access table linked to SharePoint

I have a table in Access linked to a SharePoint list. The table consists of about 15 fields whose contents are originally pulled from another data source (in Excel format). There are an additional 10 or so fields after the original 15 that make up a questionnaire (added via SharePoint) containing answers to questions about the first 15 fields.
The data in the first 15 fields needs to be updated periodically when new data from my external source is available to download. A lot of the information will remain the same, however some of the fields within each of the rows will change and need to be updated. It is also important that the 10 fields that contain the questionnaire are not modified at all during this process.
Is there a way for me to easily update the cells that have changed using an Update query or something similar? The data does have a unique identifier column (ID NUMBER) that is present on the current SharePoint list and the external data source.
I was thinking from a logical standpoint to put the new external data into a table, find the ID Number in the SP list and new external data, compare the values in the rest of the row on the SP list to the row of the external data, and if a value is different update the cell with the value from the external data. Not sure how to accomplish this using Access queries though.
I really appreciate any help at all! If you need more information, please let me know. If you think there's a more logical way to do this, please let me know your feedback!!
Here's how to get started:
http://workerthread.wordpress.com/2009/02/03/using-access-2007-to-update-sharepoint-lists/
After you get the connection set up, it's just a matter of writing the queries correctly. If you need to run multiple queries periodically, you can setup a form with buttons, and attach some VBA code to the buttons that runs the queries.

Crystal Reports parameter selection limit?

I'm trying to make a Crystal Reports 11 report off an Oracle database that's grouped by user. I've got over one thousand users. I want to create a parameter field that prompts the person to select which users they would like to view the results for. However my parameter selection field is only showing 221 of the possible users. The users appear in alphabetical order because of the SQL command's Order By statement. I'm wondering if there is a limit to the number of dynamic default values that a parameter field can store. Any help with this would be great.
The default limit in Crystal 11 appears to be 1000 (held in the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Business Objects\Suite 11.0\Crystal Reports\DatabaseOptions\LOV\MaxRowsetRecords), so your problem may lie in the construction of the parameter field itself. Make sure it is a dynamic parameter field that will query the database when used, as the odd number of values shown makes me think this was the list generated when the report was first run and saved, and therefore a static parameter list.
I tried to reproduce your situation using a database table with 5,000+ records.
I created a dynamic parameter (Crystal Reports XI R3, full version) on this field. It generated a unique list of values that appeared to be about 1,000 (I started to count it, but estimated its total size based on the scroll bar's position).
I added the registry entry HKEY_LOCAL_MACHINE\SOFTWARE\Business Objects\Suite \Crystal Reports\DatabaseOptions\LOV\MaxRowsetRecords (String) and set its value to -1.
With the registry entry change, the LoV included all values.
When the LoVs had the record limitation, it appeared to sample values from each letter of the alphabet indiscriminately. Maybe this is what you are encountering.
221 sounds awfully low to be the default selection limit, at least to me. But there is a way to increase the number of records that these dynamic parameters pull in. It involves editing the registry.
See Here
The following is for Crystal Reports 2013.
Add a new registry entry under:
For a 32-bit computer:
HKEY_LOCAL_MACHINE\SOFTWARE\SAP BusinessObjects\Suite XI 4.0\Crystal Reports\DatabaseOptions
For a 64-bit computer (under the Wow6432Node subnode):
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\SAP BusinessObjects\Suite XI 4.0\Crystal Reports\DatabaseOptions
Add a new key at that level called LOV.
Add a string value called MaxRowsetRecords.
Set the value to whatever limit you want; I selected 100000 in development. (0 or -1, meaning all values, is no longer supported.)
After making changes to the registry, restart the affected service or application as required.
Just to clarify: the MaxRowsetRecords value does NOT refer to the number of distinct parameter values you can select from. It is the maximum number of records that Crystal will look at when extracting the dynamic parameter values. Most likely, if you look at your query, you will find there are 221 unique values in the first 1,000 records. Raise the registry value and you should see more parameter values to choose from.