Divide a large SQL table into smaller files and save each one as a CSV/Excel/TXT file dynamically - sql

So, I'm working on an SAP HANA database that has 10 million records in one table, and there are 'n' tables in the DB. The constraints I'm facing are:
I do not have write access to the db.
The maximum RAM in the system is 6 GB.
Now, I need to extract the data from this table and save it as a CSV, TXT, or Excel file. I tried a plain SELECT * FROM query; with this, the machine extracts ~700k records before throwing an out-of-memory exception.
I've tried using LIMIT and OFFSET in SAP HANA and it works perfectly, but the machine takes roughly 30 minutes to process ~500k records, so going this route will be very time-consuming.
So, I wanted to know if there is any way to automate the process of selecting 500k records at a time using LIMIT and OFFSET and saving each such chunk as its own CSV/TXT file on the system, so that I can start the query and leave the machine overnight to extract the data.
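Something along these lines should do it, shown only as a minimal sketch assuming the hdbcli Python client is installed; the host, credentials, schema, table, and key column are placeholders you would need to replace:

    import csv
    from hdbcli import dbapi

    CHUNK = 500_000

    # Connection details are placeholders.
    conn = dbapi.connect(address="hana-host", port=30015,
                         user="USER", password="PASS")
    cur = conn.cursor()

    offset = 0
    part = 0
    while True:
        # An ORDER BY on a unique key keeps the paging stable between chunks.
        cur.execute(
            'SELECT * FROM "MYSCHEMA"."BIG_TABLE" ORDER BY "ID" '
            f"LIMIT {CHUNK} OFFSET {offset}"
        )
        rows = cur.fetchall()
        if not rows:
            break
        with open(f"big_table_part_{part:03d}.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([col[0] for col in cur.description])  # header row
            writer.writerows(rows)
        offset += CHUNK
        part += 1

    cur.close()
    conn.close()

If the driver can stream a single result set, calling cursor.fetchmany(CHUNK) repeatedly on one SELECT avoids re-running the query for every offset, which is likely where much of the 30 minutes per chunk is going.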

Related

Improve ETL from COBOL file to SQL

I have a multi-server/multi-process/multi-threaded solution which can parse and extract over 7 million records from a 6 GB EBCDIC COBOL file into 27 SQL tables, all in under 20 minutes. The problem: the actual parsing and extraction of the data only takes about 10 minutes, using bulk inserts into staging tables. It then takes almost another 10 minutes to copy the data from the staging tables to their final tables. Any ideas on how I can improve the second half of my process? I've tried using in-memory tables, but that blows out the SQL Server.
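One direction sometimes suggested for that second half, offered here only as a hedged sketch rather than a known fix for this setup: move the rows from staging to the final table in moderate batches so that each transaction (and the log it generates) stays small. Whether it actually beats a single INSERT ... SELECT depends on the recovery model, indexes, and triggers on the final tables; pyodbc, the DSN, and all table/column names below are placeholders.

    import pyodbc

    BATCH = 100_000
    conn = pyodbc.connect("DSN=etl_server", autocommit=False)  # placeholder DSN
    cur = conn.cursor()

    while True:
        # Delete a batch from staging and route the deleted rows straight
        # into the final table. OUTPUT ... INTO requires the target table
        # to have no enabled triggers or foreign-key involvement.
        cur.execute(
            """
            DELETE TOP (?) FROM dbo.staging
            OUTPUT DELETED.col1, DELETED.col2, DELETED.col3
            INTO dbo.final_table (col1, col2, col3);
            """,
            BATCH,
        )
        moved = cur.rowcount
        conn.commit()
        if moved == 0:
            break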

How to run SQL code on a very large CSV file (4 million+ records) without needing to open it

I have a very large file of 4 million+ records that I want to run an SQL query on. However, when I open the file it only returns 1 million contacts and does not load the rest. Is there a way for me to run my query without opening the file, so I do not lose any records? PS: I am using a MacBook, so some functions and add-ins are not available to me.
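One way to do this without a spreadsheet at all, sketched here on the assumption that Python is available (sqlite3 and csv ship with it, so no add-ins are needed on a Mac); the file and table names below are illustrative placeholders:

    import csv
    import sqlite3

    conn = sqlite3.connect("contacts.db")            # local scratch database
    with open("contacts.csv", newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        cols = ", ".join(f'"{c}"' for c in header)
        placeholders = ", ".join("?" for _ in header)
        conn.execute(f'CREATE TABLE IF NOT EXISTS contacts ({cols})')
        # Stream the 4M+ rows straight from the reader; nothing is opened
        # in a spreadsheet, so no rows are cut off at the 1M limit.
        conn.executemany(f"INSERT INTO contacts VALUES ({placeholders})", reader)
        conn.commit()

    # Any SQL query can now run against the full data set.
    for row in conn.execute("SELECT COUNT(*) FROM contacts"):
        print(row)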

Database structure for avoiding data loss, deadlocks & poor performance

The image below shows my database structure.
I have 100+ sensors which send a constant stream of data to their respective machines.
I have 5-7 different machines, each with its own SQL Express database installed.
All the machines send their data to one SERVER.
Every second, each machine sends 10 rows as bulk data to a server stored procedure.
PROBLEM: managing the large volume of data coming from every machine to a single server while avoiding deadlocks and delays in performance.
Background Logic
Bulk data from the machines is stored in a temporary table,
and then, using that temp table, I loop through each record for processing.
Finally, SP_Process_Filtered_Data contains lots of inserts and updates, and there are nested SPs for processing that filtered data.
Current Logic:
Step 1:
Every machine sends data to the SP_Manage stored procedure as bulk data in XML format, which we convert into SQL table format (see the sketch below).
This is raw data, so we filter it.
Let's say 3 rows remain after filtering.
Now I want to process each of those rows.
Step 2:
As I have to process each row, I loop through the rows and send the data to SP_Process_Filtered_Data.
This SP contains complex logic.
I am looping through each record, and every machine sends data in parallel.
So I am afraid this will cause data loss or deadlocks.
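For reference, here is a rough sketch of the per-machine sender described in Step 1, assuming Python with pyodbc on the machines. SP_Manage is taken from the question, but its parameter list, the XML layout, the DSN, and read_sensor() are all assumptions made for illustration:

    import random
    import time
    import xml.etree.ElementTree as ET
    import pyodbc

    conn = pyodbc.connect("DSN=server_db", autocommit=True)  # placeholder DSN
    cur = conn.cursor()

    def read_sensor():
        # Hypothetical stand-in for the real acquisition call on each machine.
        return (random.randint(1, 100), random.random(),
                time.strftime("%Y-%m-%dT%H:%M:%S"))

    def to_xml(rows):
        # Serialize the buffered rows into one XML payload per call.
        root = ET.Element("rows")
        for sensor_id, value, ts in rows:
            ET.SubElement(root, "row", sensor_id=str(sensor_id),
                          value=str(value), ts=ts)
        return ET.tostring(root, encoding="unicode")

    buffer = []
    while True:
        buffer.append(read_sensor())
        if len(buffer) >= 10:
            # One bulk call per ~second keeps round trips (and lock time
            # on the server) to a minimum.
            cur.execute("{CALL SP_Manage (?)}", to_xml(buffer))
            buffer.clear()
        time.sleep(0.1)

The point of the pattern is that each machine makes one call per second carrying all ten rows, so the server sees a small, predictable number of short calls rather than a stream of single-row inserts.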

Pentaho | Tools -> Wizard -> Copy Tables

I want to copy tables from one database to another database.
I have searched Google and found that we can do this with the Wizard option in the Tools menu in Spoon.
Currently I am trying to copy just one table from one database into another.
My table has just 130,000 records, and it took 10 minutes to copy.
Can we improve these loading times? I mean, just copying 100k records should not take more than 10 seconds.
Try the MySQL bulk loader (note: it is Linux only),
OR
increase the batch size:
http://julienhofstede.blogspot.co.uk/2014/02/increase-mysql-output-to-80k-rowssecond.html
You'll get massive improvements that way.

Is there a MAX size for a table in MSSQL

I have 150 million records with 300 columns (nchar). I am running a script to import the data into the database, but it always stops when it gets to 10 million.
Is there an MSSQL setting that controls how many records a table can hold? What could be making it stop at 10 million?
Edit:
I have run the script multiple times and it has been able to create multiple tables, but they all max out at the same 10 million records.
It depends on available storage. The more storage you have, the more rows you can have in a table.
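To put that in perspective with the numbers from the question, here is a rough back-of-the-envelope calculation; the nchar(10) width is purely an assumption for illustration (nchar stores 2 bytes per character in SQL Server):

    cols = 300
    bytes_per_col = 10 * 2                 # nchar(10) -> 20 bytes, fixed width
    rows = 150_000_000

    bytes_per_row = cols * bytes_per_col   # 6,000 bytes per row
    total_gib = bytes_per_row * rows / 1024 ** 3
    print(f"{total_gib:,.0f} GiB before indexes and overhead")   # ~838 GiB

So a stop at 10 million rows is far more likely to be the data or log file running out of disk space, or hitting a fixed maximum file size or autogrowth limit, than any per-table row cap; SQL Server limits rows per table only by available storage.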