Copy a large number of records from Microsoft SQL Server 2008 R2 at once - sql

The client needs around 500,000 records in csv/xlsx format every two months. Our current procedure is to manually copy 60,000 records at a time into csv/xlsx files, download them after passing through multiple servers, and finally merge all the csv/xlsx files into one csv. There is also a chance of error while compiling all the data.
We need a faster way to copy a large amount of data (around 500,000 records) out of Microsoft SQL Server 2008 R2 in one go.
Please suggest a better way.
Thanks
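
One common way to get the whole result in a single pass is the bcp command-line utility, which can export an entire query result straight to a flat file. A minimal sketch, assuming a comma-delimited csv is acceptable (server, database, table, and path names are placeholders):

bcp "SELECT * FROM YourDb.dbo.YourTable" queryout C:\export\full_extract.csv -c -t, -S YourServer -T

Here -c writes character data, -t, sets a comma as the field terminator, -S names the server, and -T uses Windows authentication. A single bcp run like this avoids both the 60,000-row chunking and the manual merge step.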

Related

Migrating legacy data from SQL Server 2000 to 2019, log block error - is there a painless way of moving over tables with autoinc identity columns? [migrated]

I've been tasked with migrating data from an instance of SQL Server 2000 to 2019. There are a total of four databases to bring over, three of which I was able to backup/restore into 2008 and then into 2019 without any issues. Please note: I am not a DBA in any sense, though I'm the closest thing to one on hand.
The fourth and final database presented the following error that prevented moving from 2008 to 2019:
System.Data.SqlClient.SqlError: An error occurred while processing the log for database 'DbNameHere'. The log block version 2 is unsupported. This server supports log version 3 to 6. (Microsoft.SqlServer.SmoExtended)
Is there a simple fix for this problem that I'm missing in the various SSMS menus?
Alternatively, is there a way to copy raw data from one server to another via, for instance, a flat file, and preserve the identity columns as identity columns? That is, I don't want to just strip that column and bulk insert, as they are often used as foreign keys in other tables, and with twenty-some-odd years of data, something is bound to break in doing this.
An example of an ideal final result in this solution would be something like: legacy table X has 1000 rows, the last of which has an identity column value of 1000. Once the move is complete, new table X has 1000 rows, the last of which has an identity column value of 1000, and upon insert the next row automatically increments to 1001.
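For the flat-file route, one way to keep the identity values is BULK INSERT with the KEEPIDENTITY option. A minimal sketch, assuming the legacy table was exported to a tab-delimited file (all table and path names are placeholders):

-- KEEPIDENTITY loads the identity values from the file instead of generating new ones
BULK INSERT dbo.TableX
FROM 'C:\export\TableX.dat'
WITH (KEEPIDENTITY, FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');

-- Checks the current identity value and corrects it to the column maximum if needed,
-- so the next insert continues at 1001 in the example above
DBCC CHECKIDENT ('dbo.TableX');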
Apart from unsuccessfully messing around with flat files, I've also tried the "Copy Database" option in SSMS, which also failed.
I would attempt to get SQL Server to rebuild the transaction log. Based on the error message, that might sort out the situation.
First detach the database with sp_detach_db. The ldf file very likely won't be needed for the subsequent attach, and rebuilding the log this way may resolve the error.
Then attach the database without the ldf file, using CREATE DATABASE with either the FOR ATTACH or FOR ATTACH_REBUILD_LOG option.
I would do this on the 2008 instance, since from what I understand you got the database in there successfully. But feel free to experiment with which version (2000 or 2008) you run the detach on and which version (2000, 2008, 2019) you run the attach on.
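A minimal sketch of that detach/attach sequence, with the database name and file path as placeholders:

-- Detach the database (no active connections may be using it)
EXEC sp_detach_db @dbname = N'DbNameHere';

-- Reattach from the data file alone; SQL Server builds a fresh transaction log
CREATE DATABASE DbNameHere
ON (FILENAME = N'C:\Data\DbNameHere.mdf')
FOR ATTACH_REBUILD_LOG;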

Find table with most rows in entire SQL 2016 instance

I was trying to solve this specific issue:
I needed a single query that retrieves the table with the most rows in the entire instance. (I would run it on 20 SQL instances, all SQL 2016.)
It would be run in production, so ideally it should not be a resource-consuming SP…
All other solutions I saw were either for SQL 2005, or they only retrieved it for a single database.
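A hedged sketch of one way to do it, using the undocumented (but long-shipping) sp_MSforeachdb procedure to sum sys.partitions row counts in every database; it touches only metadata, so it is cheap to run in production. The '?' token is replaced with each database name:

DECLARE @counts TABLE (DbName sysname, TableName nvarchar(300), RowCnt bigint);

-- index_id 0 or 1 covers heaps and clustered indexes, so each row is counted once
INSERT INTO @counts
EXEC sp_MSforeachdb N'
    SELECT ''?'', s.name + ''.'' + t.name, SUM(p.rows)
    FROM [?].sys.tables AS t
    JOIN [?].sys.schemas AS s ON s.schema_id = t.schema_id
    JOIN [?].sys.partitions AS p ON p.object_id = t.object_id
    WHERE p.index_id IN (0, 1)
    GROUP BY s.name, t.name;';

-- The single largest table in the instance
SELECT TOP (1) DbName, TableName, RowCnt
FROM @counts
ORDER BY RowCnt DESC;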

How to left join Excel table to a table in SQL Server database

I have a table (tbl1) in an Excel file with about 70k rows. I have linked that table into Power Pivot. There is another table (tbl2) in SQL Server with millions of rows that I need to left join to the table in my Excel file on
tbl1.[Member Number] = tbl2.[memid]
What query should I use to do this without having to import the whole tbl2 from SQL Server (which throws an error in Power Pivot due to memory constraints)?
The preferred environment is Power Pivot, but I do have SQL Server Management Studio. I don't have WRITE permission on the server where tbl2 is located. I do, however, have WRITE access on a different server.
Thank you!
Import the Excel file into the SQL server where you have WRITE access, do the join there, and import the data from this server. Do you see any problem with this approach?
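A minimal sketch of that join, assuming tbl1 has already been imported into the WRITE server and that a linked server (here called ReadServer, a placeholder) points at the server hosting tbl2:

SELECT t1.*, t2.*
FROM dbo.tbl1 AS t1
LEFT JOIN [ReadServer].SourceDb.dbo.tbl2 AS t2
    ON t1.[Member Number] = t2.[memid];

Since tbl1 is only ~70k rows, the result should be far smaller than all of tbl2, provided memid values are not heavily duplicated and you select only the columns you need.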
A few options spring to mind:
Get more memory/64-bit Excel and use PowerPivot as you are currently trying to do. If you are working in 32-bit Excel then you are effectively constrained to about 1 GB of RAM, whereas on 64-bit you can use everything you've got. 64-bit PowerPivot can potentially deal with hundreds of millions of records.
Read the SQL data and your csv into R, do the join and either write the output to your WRITE db or save as CSV to feed into PowerPivot. Although R has a steep initial learning curve, doing this kind of thing with the dplyr library is straightforward.
Dump your SQL table to csv, read both it and your current csv into the WRITE db, and do the join in your PowerPivot SQL query.
Which works best probably depends on the skills you have and how often you are doing this. I'd probably go down the 'R' route as you can set it up as a scheduled job.

After Access to SQL migration using SSMA only 100 rows are being returned

I migrated an Access db to SQL Server using SSMA. My application still connects to the Access .mdb file and returns the records.
Before migration: the table used to populate data in the UI has 1000 rows, and the application displays all 1000 rows when using the non-migrated .mdb file.
Post migration: the .mdb file used by the application returns only 100 records. I tried with exactly 100 records and it worked, but with 101 it started throwing a connectivity error.
How do I handle this? Does SSMA have any restriction on the number of records to be returned after migration?
Very urgent. Any help is greatly appreciated.
I suggest using SSIS to convert your data from Access to SQL Server.
If you don't want to use SSIS, copy the data from your Access database, paste it into an Excel file, and then convert your data from Excel to SQL Server (I think a column data type or column size is what prevents converting directly from Access to SQL Server).
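As an alternative to the Excel detour, the .mdb file can also be queried directly from SQL Server with OPENROWSET. A minimal sketch, assuming the ACE OLE DB provider is installed and ad hoc distributed queries are enabled (all paths and names are placeholders):

-- Pulls every row from the Access table into a new SQL Server table
SELECT *
INTO dbo.ImportedTable
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'C:\data\legacy.mdb';'admin';'',
                'SELECT * FROM SourceTable');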

Bulk INSERT into SQL Server CE

I am using WebMatrix for a site right now with its built-in SQL Server Compact database, and it's alright, but it only lets you create one row at a time; it has no bulk insert features (as I expected). The problem is that I have tens of thousands of rows in a spreadsheet.
I used to use Navicat for SQL Server, which let me define a table name and would then automatically import the spreadsheet into a table: tens of thousands of rows, all within about 30 seconds. How can I get Navicat for SQL Server to connect to WebMatrix's database for my website so I can do mass bulk inserts?
I have a Bulk Insert library that you may be able to use: http://sqlcebulkcopy.codeplex.com