I want to copy tables from one database to another database.
I have searched on Google and found that we can do this with the wizard option in Spoon's Tools menu.
Currently I am trying to copy just one table from one database into another.
My table has just 130,000 records, and it took 10 minutes to copy.
Can we improve this loading time? To copy just 100k records, it should not take more than 10 seconds.
Try the MySQL Bulk Loader step - note that it is Linux only.
OR
Fix the batch size:
http://julienhofstede.blogspot.co.uk/2014/02/increase-mysql-output-to-80k-rowssecond.html
You'll get massive improvements that way.
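The linked post covers the Kettle-side settings (Table Output commit size and the MySQL JDBC driver options). As a rough illustration of why batch size matters - a sketch in Python with mysql-connector and hypothetical table/column names, not the Spoon configuration itself - sending rows in large batches with one commit per batch is far faster than committing row by row:

```python
import mysql.connector  # pip install mysql-connector-python

# Hypothetical target table and columns; this only illustrates the effect of
# batch size, it does not replace the Spoon Table Output step.
conn = mysql.connector.connect(host="localhost", user="etl",
                               password="***", database="target_db")
cur = conn.cursor()

BATCH = 10_000  # hypothetical batch size; tune against network latency and the server

def load(rows):
    """rows: an iterable of (id, name, amount) tuples read from the source table."""
    buffer = []
    for row in rows:
        buffer.append(row)
        if len(buffer) == BATCH:
            _flush(buffer)
    if buffer:
        _flush(buffer)

def _flush(buffer):
    # One round trip and one commit per batch instead of one per row.
    cur.executemany(
        "INSERT INTO customers (id, name, amount) VALUES (%s, %s, %s)",
        buffer,
    )
    conn.commit()
    buffer.clear()
```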
Related
I'm consolidating the information from 7 SQL databases into one.
I've made an SSIS package using a Lookup transformation, and I managed to get the result as expected.
The problem: 30 million rows. I want to run a daily task that adds only the new rows from the source tables to the destination table.
Right now it takes about 4 hours to execute the package...
Any suggestions?
Thanks!
I have only tried full cache mode...
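One common way to avoid reprocessing all 30 million rows every day is to load incrementally: keep a watermark (the highest key or modified date already copied) and only pull rows beyond it. A minimal sketch outside SSIS, using Python/pyodbc with hypothetical DSN, table, and column names; in SSIS the same idea becomes a WHERE clause on the source query driven by a variable:

```python
import pyodbc  # hypothetical DSNs; any ODBC driver for SQL Server works

# Hypothetical objects: SourceDb.dbo.Orders, TargetDb.dbo.Orders, and a
# dbo.LoadWatermark table holding the highest key already copied per table.
src = pyodbc.connect("DSN=SourceDb")
dst = pyodbc.connect("DSN=TargetDb")

# 1. Read the watermark left by the previous run.
last_id = dst.cursor().execute(
    "SELECT last_id FROM dbo.LoadWatermark WHERE table_name = ?", "Orders"
).fetchone()[0]

# 2. Pull only the rows added since then, not all 30 million.
src_cur = src.cursor()
src_cur.execute(
    "SELECT id, col1, col2 FROM dbo.Orders WHERE id > ? ORDER BY id", last_id
)

# 3. Insert the new rows in batches and advance the watermark as we go.
dst_cur = dst.cursor()
dst_cur.fast_executemany = True  # pyodbc option that sends parameter sets in bulk
while True:
    batch = src_cur.fetchmany(50_000)
    if not batch:
        break
    dst_cur.executemany(
        "INSERT INTO dbo.Orders (id, col1, col2) VALUES (?, ?, ?)",
        [tuple(r) for r in batch],
    )
    dst_cur.execute(
        "UPDATE dbo.LoadWatermark SET last_id = ? WHERE table_name = ?",
        batch[-1][0], "Orders",
    )
    dst.commit()
```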
So, I'm working on an SAP HANA database that has 10 million records in one table, and there are 'n' tables in the db. The constraints I'm facing are:
I do not have write access to the db.
The maximum RAM in the system is 6 GB.
Now, I need to extract the data from this table and save it as a CSV, TXT, or Excel file. I tried a SELECT * FROM query; using this, the machine extracts ~700k records before throwing an out-of-memory exception.
I've tried using LIMIT and OFFSET in SAP HANA and it works perfectly, but it takes around 30 minutes for the machine to process ~500k records, so going this route will be very time-consuming.
So, I wanted to know if there is any way to automate the process of selecting 500k records at a time using LIMIT and OFFSET and saving each such 500k-record sub-file automatically as a CSV/TXT file on the system, so that I can start the query and leave the machine running overnight to extract the data.
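One way to automate this - a sketch using Python and the hdbcli driver, with hypothetical connection details, schema, table, and key-column names - is to loop over LIMIT/OFFSET pages and write each chunk to its own CSV file:

```python
import csv
from hdbcli import dbapi  # SAP HANA Python client (pip install hdbcli)

# Hypothetical connection details, schema, table, and key column.
conn = dbapi.connect(address="hana-host", port=30015,
                     user="READ_ONLY_USER", password="***")
cur = conn.cursor()

CHUNK = 500_000  # lower this if the rows are wide and memory is tight
offset = 0
part = 0

while True:
    # ORDER BY a key column keeps the LIMIT/OFFSET pages stable between queries.
    cur.execute(
        f'SELECT * FROM "MYSCHEMA"."BIG_TABLE" ORDER BY "ID" '
        f'LIMIT {CHUNK} OFFSET {offset}'
    )
    rows = cur.fetchall()
    if not rows:
        break

    # Write this chunk to its own file: big_table_part_000.csv, _001.csv, ...
    with open(f"big_table_part_{part:03d}.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cur.description])  # header row
        writer.writerows(rows)

    part += 1
    offset += CHUNK

cur.close()
conn.close()
```

If the table has an increasing key column, paging with WHERE "ID" > last_id is usually much faster than large OFFSET values, because the database does not have to skip over all the earlier rows on every iteration.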
I have a very large file of 4 million+ records that I want to run an SQL query on. However, when I open the file it only returns 1 million contacts and does not load the rest. Is there a way for me to run my query without opening the file, so I do not lose any records? P.S. I am using a MacBook, so some functions and add-ins are not available to me.
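If the export is a CSV/text file, one option that avoids the spreadsheet row limit entirely is to load it into a local SQLite database in chunks and run the SQL there. A sketch assuming Python and pandas are installed, with a hypothetical file name, column names, and query:

```python
import sqlite3
import pandas as pd

# Hypothetical file and column names; assumes the source is a CSV export.
conn = sqlite3.connect("contacts.db")

# Stream the file in 100k-row chunks so the 4M+ rows never sit in memory at once.
for chunk in pd.read_csv("contacts.csv", chunksize=100_000):
    chunk.to_sql("contacts", conn, if_exists="append", index=False)

# Now run ordinary SQL against the full 4M+ rows.
result = pd.read_sql_query(
    "SELECT country, COUNT(*) AS n FROM contacts GROUP BY country", conn
)
print(result)
conn.close()
```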
I am new to SSIS. I have been given a task to archive data from production to an archive DB and then delete it from production, keeping 13 months of data in production. There are around 300+ tables; of these I have to archive around 50 tables, and 6 of those 50 are around 1 TB in size.
Listing out the 2 methods we are planning:
1. Using 50 data flow tasks in a sequence container.
2. Using SELECT * FROM ... INSERT INTO ..., where the table and column names are stored in a configuration table and the data is archived through a loop (see the sketch after this question).
Which will be the better option?
If there is any other, better method, please let me know.
What precautions (performance tips) should I take while doing the archive process so that it does not affect the production server?
Please give your suggestions.
Thanks
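For option 2, the usual precaution is to move the data in small batches with a commit per batch, so locks stay short and the transaction log on production does not balloon. A rough sketch of such a loop, using Python/pyodbc with hypothetical database, table, column, and config-table names (the same statements could run from an Execute SQL Task); it relies on the T-SQL DELETE ... OUTPUT DELETED.* INTO pattern, which assumes the archive tables live on the same instance, have matching columns, and have no triggers or foreign keys:

```python
import pyodbc  # hypothetical connection; the loop itself could live in SSIS

# Hypothetical config table dbo.ArchiveConfig(table_name, date_column) listing
# the ~50 tables to archive. Names come from your own config table, not user input.
conn = pyodbc.connect("DSN=Prod", autocommit=False)
cur = conn.cursor()

BATCH = 50_000  # small batches keep lock durations and log growth modest

tables = cur.execute(
    "SELECT table_name, date_column FROM dbo.ArchiveConfig"
).fetchall()

for table_name, date_column in tables:
    while True:
        # Move one batch: delete from production and, in the same statement,
        # insert the deleted rows into the archive table. An index on the
        # date column keeps each batch from scanning the whole 1 TB table.
        cur.execute(f"""
            DELETE TOP ({BATCH}) FROM Prod.dbo.{table_name}
            OUTPUT DELETED.* INTO Archive.dbo.{table_name}
            WHERE {date_column} < DATEADD(MONTH, -13, GETDATE());
        """)
        moved = cur.rowcount
        conn.commit()            # commit per batch so the log space can be reused
        if moved < BATCH:
            break                # nothing (or only a partial batch) left to move

conn.close()
```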
I have 150 million records with 300 columns (nchar). I am running a script to import the data into the database, but it always stops when it gets to 10 million.
Is there an MSSQL setting that controls how many records a table can hold? What could be making it stop at 10 million?
Edit:
I have run the script multiple times and it has been able to create multiple tables, but they all max out at the same 10 million records.
It depends on available storage; SQL Server does not impose a fixed row-count limit on a table. The more available storage you have, the more rows you can have in a table.
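Since there is no setting that caps a table at 10 million rows, the stall is more likely the data or log file hitting its maximum size or having autogrowth disabled. A quick check - a sketch using Python/pyodbc with a hypothetical connection string; the same query can be run directly in SSMS:

```python
import pyodbc  # hypothetical DSN; any SQL Server connection works

# List each database file with its current size, maximum size, and growth
# settings. A file at its max size or with growth disabled would make the
# import stop at what looks like a fixed row "limit".
conn = pyodbc.connect("DSN=MyTargetDb")
rows = conn.cursor().execute("""
    SELECT name,
           type_desc,
           size * 8 / 1024 AS size_mb,          -- size is stored in 8 KB pages
           CASE max_size
                WHEN -1 THEN 'unlimited'
                WHEN 0  THEN 'no growth allowed'
                ELSE CAST(max_size * 8 / 1024 AS varchar(20)) + ' MB'
           END AS max_size_desc,
           growth,
           is_percent_growth
    FROM sys.database_files;
""").fetchall()

for r in rows:
    print(r.name, r.type_desc, r.size_mb, r.max_size_desc,
          r.growth, r.is_percent_growth)
conn.close()
```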