fast parse property of flat file source - sql

I am using the Flat File Source for a large data migration, and the source data is in text stream form (DT_TEXT) rather than integer, datetime, or string. The component does not support fast parse for text streams.
Could I get any ideas to improve performance in this scenario?
thanks
prav

As you've seen, fast parse does not support strings. It only supports integers, date and time, and then with caveats.
The first thing I would do is ensure that you're using the smallest data types you can in your flow definition (DT_WSTR rather than DT_NTEXT, for example, if your strings are under 4000 characters).

This problem was solved by using DT_STR instead of DT_TEXT, after changing my DB design for better performance. I got 1 million rows transferred in 13 seconds, which is what my business logic requires.
thanks
prav
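For reference, a minimal sketch of the kind of schema change this implies, using hypothetical table and column names: SQL Server's text type maps to DT_TEXT in the data flow, while a sized varchar maps to DT_STR.
-- hypothetical staging table: the column was text (DT_TEXT in the flow);
-- a sized varchar lets the flow treat it as DT_STR instead
ALTER TABLE dbo.ImportStaging
    ALTER COLUMN SomeColumn varchar(255);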

Related

What is the benefit of one "DATE" datatype over another in Laravel/SQLite?

In my app, I'm just using an SQLite database for development. In the migration, I declare a DATE datatype, which Laravel seems to handle without any problem, and the database itself creates it as a varchar.
According to this nice article (http://www.sqlitetutorial.net/sqlite-date/), SQLite has basically got three options for handling dates:
Using the TEXT storage class to store SQLite date and time values
Using the REAL storage class to store SQLite date and time values
Using the INTEGER storage class to store SQLite date and time values
So as I'm trying to formulate my approach, I'm thinking ahead that I will likely, at some point, need to step up and move to a higher-performance SQL database (MySQL / Postgres / etc.), and then may have datatype translation challenges.
But then also, at the application layer, Laravel itself has some manipulations.
Now, the question I'm asking is this: what is the benefit of one type over another? Is there some kind of reason to choose one type over another? My thinking is that TEXT is nice and human-readable for backend support, but it may require additional coding to manipulate strings.
INTEGERs are probably more efficient, and would translate to a bigger SQL server more easily than text.
Does anyone know of a comparison of the pro's and con's of various choices?
Any advice? Thanks in advance.
The size of an integer is 4 bytes. The size of a letter in text is 1 byte.
To represent a date and time you need only one UTC number when you use an integer, so it's much better to use 4 bytes of integer than 8 bytes of text. I don't see how REAL can be better than INTEGER, for the exact same reason. I would say you should use INTEGER.
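For illustration, a minimal SQLite sketch of the INTEGER approach, using a hypothetical events table: strftime('%s', ...) writes a Unix timestamp, and datetime(..., 'unixepoch') reads it back in human-readable form.
-- hypothetical table: the timestamp is stored as Unix epoch seconds
CREATE TABLE events (
    id         INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    created_at INTEGER NOT NULL   -- seconds since 1970-01-01 UTC
);

-- write "now" as an integer
INSERT INTO events (name, created_at)
VALUES ('demo', strftime('%s', 'now'));

-- read it back as a human-readable UTC string
SELECT name, datetime(created_at, 'unixepoch') AS created_at_utc
FROM events;
On the Laravel side you would presumably still cast the attribute to a date in the model, so the storage class mostly affects size and comparison speed rather than application code.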

Table vs XML / JSON / YAML - does a table require less storage if the data is at all related? More efficient than compression?

To add a field to an XML object takes the length of the fieldname + 3 characters (or 7 when nested), and for JSON 4 (or 6 when nested):
<xml>xml</xml> xml="xml"
{"json":json,} "json": json,
Assume the average overhead is 4 characters and the average fieldname is 11: to justify the use of XML/JSON over a table in terms of storage, each field must on average appear in fewer than 1/15 of the objects; in other words, there must be ~15 times more distinct fields across the whole related group of objects than one object has on average.
(Yet a table may very well still allow faster computation when this ratio is higher and the table is bigger in storage.) I have not yet seen a use of XML/JSON with a very high ratio.
Aren't most real uses of XML/JSON forced and inefficient?
Shouldn't related data be stored and queried in relations (tables)?
What am I missing?
Example conversion XML to table
Object1
<aaaaahlongfieldname>1</aaaaahlongfieldname>
<b>B
<c>C</c>
</b>
Object2
<aaaaahlongfieldname>2</aaaaahlongfieldname>
<b><c><d>D</d></c></b>
<ba>BA</ba>
<ba "xyz~">BA</ba>
<c>C</c>
Both converted to a CSV-like table (delimiter declaration, head, line1, line2)
delimiter=,
aaaaahlongfieldname,b,b/c,b/c/d,ba,ba-xyz~,c
1,B,C,,,,
2,,,D,BA,BA,C
The / and - symbols in values only need to be escaped in the head,
but ,,,, could also be written as an escaped count of delimiters in a row, e.g. \4 (when an escape symbol or string is declared as well - worth it with large numbers of empty fields), and since the escape character and the delimiter need to be escaped when they appear in values, they could automatically be declared as rare symbols that hardly ever appear:
escape=~
delimiter=°
aaaaahlongfieldname°b°b/c°b/c/d°ba°ba-xyz~~°c
1°B°C~4
2°°°D°BA°BA°C
Validation / additional info: XML/JSON omits all empty fields, so missing fields in rows cannot be noticed. A line of a table is only valid when the number of fields is correct, and (faulty) lines must be noticed; but since columns have different datatypes, missing delimiters could usually be repaired easily.
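For comparison, a minimal relational sketch of the same two objects (hypothetical table and column names, with the nested paths flattened into columns as in the head row above):
CREATE TABLE objects (
    aaaaahlongfieldname INTEGER,
    b      TEXT,
    b_c    TEXT,
    b_c_d  TEXT,
    ba     TEXT,
    ba_xyz TEXT,
    c      TEXT
);

INSERT INTO objects VALUES (1, 'B', 'C', NULL, NULL, NULL, NULL);
INSERT INTO objects VALUES (2, NULL, NULL, 'D', 'BA', 'BA', 'C');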
Edit:
On readability/editability: a good thing, of course; the first time one reads XML and JSON it may be self-explanatory if one has already read HTML and JS, but is that all? Most of the time it is machines reading it, and sometimes developers, neither of whom may be entertained by the verbosity.
The CSV in your example is a quite inefficient use of 8-bit encoding. You're hardly even using 5 bits of entropy, clearly wasting 3 bits. Why not compress it?
The answer to all of these is that people make mistakes, and stronger typing trades efficiency for safety. It is impossible for machine or human to identify a transposed column in a CSV stream; however, both JSON and XML would handle it automatically, and (assuming no hierarchy boundaries got crossed) everything would still work. 30 years ago, when storage space was scarce and instructions per second were sometimes measured in the hundreds, using minimal amounts of decoration in protocols made sense. These days even embedded systems have relatively vast amounts of power and storage, so the tradeoff for a little extra safety is much easier to make.
For tightly controlled data transfer, say between modules that my development team is working on, JSON works great. But when data needs to go between different groups, I strongly prefer XML, simply because it helps both sides understand what is happening. If the data needs to go across a "slow" pipe, compression will remove 98% of the XML "overhead".
The designers of XML were well aware that there was a high level of redundancy in the representation, and they considered this a good thing (I'm not saying they were right). Essentially, (a) redundancy costs nothing if you use data compression, (b) redundancy (within limits) helps human readability, and (c) redundancy makes it easier to detect and diagnose errors, which is especially important when XML is being hand-authored.

Best approach for bringing 180K records into an app: Core Data: yes? CSV vs XML?

I've built an app with a tiny amount of test data (clues & answers) that works fine. Now I need to think about bringing in a full set of clues & answers, which is roughly 180K records (it's a word game). I am worried about speed and memory usage, of course. Looking around the intertubes and my library, I have concluded that this is probably a job for Core Data. Within that approach, however, I guess I can bring it in as CSV or as XML (I can create either one from the raw data using a scripting language). I found some resources about how to handle each case. What I don't know is anything about the overall speed and other issues one might expect when using CSV vs XML. The CSV file is about 3.6 MB and the data type is strings.
I know this is dangerously close to a non-question, but I need some advice as either approach requires a large coding commitment. So here are the questions:
For a file of this size and characteristics, would one expect CSV or XML to be a better approach? Is there some other format/protocol/strategy that would make more sense?
Am I right to focus on Core Data?
Maybe I should throw some fake code here so the system doesn't keep warning me about asking a subjective question. But I have to try! Thanks for any guidance. Links to discussions appreciated.
As for file size, CSV will always be smaller than an XML file, as it contains only the raw data in ASCII format. Consider the following 3 rows and 3 columns.
Column1, Column2, Column3
1, 2, 3
4, 5, 6
7, 8, 9
Compare that to its XML counterpart below, which doesn't even include schema information. It is also in ASCII format, but the rowX and ColumnX tags have to be repeated multiple times throughout the file. Compression could of course help with this, but I'm guessing that even with compression the CSV will still be smaller.
<root>
<row1>
<Column1>1</Column1>
<Column2>2</Column2>
<Column3>3</Column3>
</row1>
<row2>
<Column1>4</Column1>
<Column2>5</Column2>
<Column3>6</Column3>
</row2>
<row3>
<Column1>7</Column1>
<Column2>8</Column2>
<Column3>9</Column3>
</row3>
</root>
As for your other questions, sorry, I cannot help there.
This is large enough that the I/O time difference will be noticeable, and since the CSV is - what, 10x smaller? - the processing time difference (whichever is faster) will be negligible compared to the difference in reading it in. And CSV should be faster outside of I/O too.
Whether to use Core Data depends on which features of Core Data you hope to exploit. I'm guessing the only one is querying, and it might be worth it for that; although if it's just a simple mapping from clue to answer, you might just want to read the whole thing from the CSV file into an NSMutableDictionary. Access will be faster.

Pre-allocate varbinary(max) without actually sending null data to the SQL Server?

I'm storing data in a varbinary(max) column and, for client performance reasons, chunking writes through the ".WRITE()" function using SQL Server 2005. This works great but, due to the side effects, I want to avoid the varbinary column dynamically sizing during each append.
What I'd like to do is optimize this by pre-allocating the varbinary column to the size I want. For example if I'm going to drop 2MB into the column I would like to 'allocate' the column first, then .WRITE the real data using offset/length parameters.
Is there anything in SQL that can help me here? Obviously I don't want to send a null byte array to the SQL server, as this would partially defeat the purpose of the .WRITE optimization.
If you're using a (MAX) data type, then anything above 8K goes into row overflow storage, not the in-page storage. So you just need to put in enough data to get it up to the 8K for the row, making that take up the in-page allocation for the row, and the rest goes into row-overflow storage anyway. There's some more here.
If you want to pre-allocate everything, including the row overflow data, you can use something akin to (example does 10000 bytes):
SELECT CONVERT([varbinary](MAX), REPLICATE(CONVERT(varchar(MAX), '0'), 10000))
First of all kudos to the answer provided - this was a great help! However, there is one slight change that you may want to consider. The code above actually allocates the varbinary field with a converted zero character (hex code 0x30). This may not be what you actually want, particularly if you want to perform binary operations on the field later. What I think is more useful is to allocate the field with a NUL value (hex code 0x00) so that all the bits are turned off by default. To do this, simply make the following correction:
SELECT CONVERT([varbinary](MAX), REPLICATE(CONVERT(varchar(MAX), CHAR(0)), 10000))
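As a minimal sketch of the chunked-write side (hypothetical table, key, and parameter names), once the column has been pre-allocated you can overwrite each chunk in place with .WRITE rather than appending:
-- hypothetical table: dbo.Blobs (Id int PRIMARY KEY, Data varbinary(max))
DECLARE @chunk varbinary(max);
DECLARE @offset bigint;
SET @chunk = 0x0102030405;   -- the next chunk received from the client
SET @offset = 0;             -- byte offset of this chunk within the blob

UPDATE dbo.Blobs
SET Data.WRITE(@chunk, @offset, DATALENGTH(@chunk))
WHERE Id = 1;
Passing DATALENGTH(@chunk) as the length replaces exactly that many of the pre-allocated bytes at the given offset, so the column does not have to grow on each call.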

Using integers and requiring multiplication vs. using decimals as a data type - what are your thoughts?

What are your thoughts on this? I'm working on integrating some new data that's in a tab-delimited text format, and all of the decimal columns are kept as plain integers; to determine the decimal amount, you need to multiply the number by .01. The files do this for things like percentages, weights, and pricing information. For example, an item's price is expressed as 3259 in the data files, and when I want to display it I need to multiply it to get the "real" amount of 32.59.
Do you think this is a good or bad idea? Should I keep my data structure identical to the one provided by the vendor, or should I make the database columns true decimals and use SSIS or some sort of ETL process to automatically multiply the integer columns into their decimal equivalents? At this point I haven't decided whether I am going to use an ORM or stored procedures or what to retrieve the data, so I'm trying to think long-term and decide which approach to use. I could also easily just handle this in code from a DTO or similar, something along the lines of:
public class Product
{
    // ...
    private int _price;   // raw vendor value, in hundredths (e.g. 3259 = 32.59)

    public decimal Price
    {
        get { return this._price * 0.01m; }
        set { this._price = (int)(value * 100m); }
    }
}
But that seems like extra and unnecessary work on the part of a class. How would you approach this, keeping in mind that the data is provided in integer format by a vendor that you will regularly need to get updates from?
"Do you think this is a good or bad idea?"
Bad.
"Should I be keeping my data structure identical to the one provided by the vendor?"
No.
"Should I make the database columns true decimals?"
Yes.
It's so much simpler to do what's right. Currently, the data is transmitted with no "." to separate the whole number from the decimal part; that doesn't have any real significance.
The data is decimal. Decimal math works. Use the decimal math provided by your language and database. Don't invent your own version of decimal arithmetic.
Personally I would much prefer to have the data stored correctly in my database and just do a simple conversion every time an update comes in.
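For example, a minimal sketch of doing that conversion at import time (hypothetical staging and destination table names), so the stored value is already a true decimal:
-- the staging table holds the vendor's raw integer values
INSERT INTO dbo.Product (ProductId, Price)
SELECT ProductId,
       CAST(RawPrice AS decimal(10, 2)) / 100   -- 3259 becomes 32.59
FROM dbo.ProductStaging;
After that, every consumer of the Product table sees 32.59 directly, and no multiplication logic has to live in the application classes.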
Pedantically: they aren't kept as ints either. They are strings that require parsing.
Philosophically: you have information in the file and you should write data into the database. That means transforming the information in any way necessary to make it meaningful/useful. If you don't do this transform up front, then you'll be doomed to repeat the transform across all consumers of the database.
There are some scenarios where you aren't allowed to transform the data, such as being able to answer the question: "What was in the file?". Those scenarios would require the data to be written as string - if the parse failed, you wouldn't have an accurate representation of the file.
In my mind the most important facet of using Decimal over Int in this scenario is maintainability.
Data stored in tables should be clearly meaningful without the need for arbitrary manipulation. If manipulation is required, it should be clearly evident that it is (such as from the field name).
I recently dealt with data where days of the week were stored as the values 2-8. You cannot imagine the fallout this caused (testing didn't show the problem for a variety of reasons, but live use did cause political explosions).
If you do ever run into such a situation, I would make absolutely certain that data cannot be written to or read from the table without the use of stored procedures or views. This lets you ensure the necessary manipulation is both enforced and documented. If you don't have both of these, some poor sod who follows you in the future will curse your very name.
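As a sketch of that idea (hypothetical names, reusing the days-of-the-week example above): keep the vendor's raw values in a base table and expose only a view that presents the meaningful values.
-- hypothetical: the vendor stored weekdays as 2-8; the view exposes the
-- conventional 1-7 so consumers never see the raw encoding
CREATE VIEW dbo.ScheduleView
AS
SELECT ScheduleId,
       RawDayOfWeek - 1 AS DayOfWeek
FROM dbo.ScheduleRaw;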