Database size much more than database stats [closed]

I am trying to export a database from phpMyAdmin in SQL format.
I checked the database stats and it showed the size of the database as 285 MB. I started to download it, but the file has already crossed 500 MB with no sign of the download completing.
What could be the reason for this?

Exporting to SQL converts the data to text and adds additional text between fields and rows.
For example, a TINYINT takes 1 byte of storage, but as SQL text it takes 1-4 bytes ('0' to '-127').

While mysqldump exports data to a file, it doesn't operate on the raw binary content. It creates the SQL statements needed to build your database from scratch and fill it with INSERTs.
So your dump contains SQL statements as text (CREATE, INSERT, etc.), comments, connection settings commands, and so on. All your binary data is represented as strings as well.
That's why your dump file is much bigger than the actual data size in the DB.
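To make this concrete, here is a hypothetical fragment of what such a dump file contains (the table and values are invented for illustration; a real dump looks similar but covers every table):

-- Hypothetical excerpt of a mysqldump .sql file: everything is plain text,
-- not the raw on-disk pages, so values generally take more space than stored.
CREATE TABLE `events` (
  `id` INT NOT NULL AUTO_INCREMENT,
  `level` TINYINT NOT NULL,   -- stored in 1 byte on disk
  `payload` BLOB,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;

-- Each value is written out as text: the TINYINT -127 costs 4 characters here,
-- and binary payloads become hex or escaped string literals, which roughly
-- doubles their size compared to the stored bytes.
INSERT INTO `events` (`id`, `level`, `payload`) VALUES
(1, -127, 0x48656C6C6F),
(2, 0, 0xDEADBEEF);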

Which file type is better for importing data into SQL Server: CSV or JSON? [closed]

I am taking part in a project where a third-party company will provide us with an export of our customer data so that we can import them into our in-house system.
Each customer record has about twenty fields. The data-types are strings, booleans, integers and dates (with time components and UTC offset components). None of the strings are longer than 250 chars. The integer can range from 0 to 100,000 inclusive.
I have to import about 2 million users into a SQL Server database. I am in the planning phase and trying to determine if I should ask for the export file in csv or json. I am planning on asking for both (just in case), but I don't yet know if I can.
If I can only pick one file-type (csv or json), which is better for this kind of work? Can anyone with experience importing data into SQL Server provide any advice on which is better?
Both are about equally fast if you use a bulk load method.
Since SQL Server 2016, JSON is natively supported and you can manipulate it easily with the built-in JSON functions.
You can also import the file directly via T-SQL with OPENJSON
and OPENROWSET (BULK ...). Alternatively, you can put that T-SQL into an SSIS package.
See this article for more details:
https://www.sqlshack.com/import-json-data-into-sql-server/
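As a rough sketch of the T-SQL route (the file path and the customer columns here are assumptions; adjust them to whatever the real export contains):

-- Load the whole JSON file into a variable, then shred it into rows.
DECLARE @json NVARCHAR(MAX);

SELECT @json = BulkColumn
FROM OPENROWSET (BULK 'C:\import\customers.json', SINGLE_CLOB) AS j;

INSERT INTO dbo.Customers (CustomerId, FullName, IsActive, Score, SignedUpAt)
SELECT CustomerId, FullName, IsActive, Score, SignedUpAt
FROM OPENJSON(@json)
WITH (
    CustomerId INT             '$.id',
    FullName   NVARCHAR(250)   '$.name',
    IsActive   BIT             '$.active',
    Score      INT             '$.score',
    SignedUpAt DATETIMEOFFSET  '$.signedUpAt'   -- keeps the UTC offset from the export
);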

Size of database after being restored from SQL dump [closed]

I have a 5 GB PostgreSQL dump file. I will restore it with the psql command, but I have almost no space left on my computer (about 1 GB). I want to know: will the database take up more than or equal to 5 GB?
A SQL dump is typically a lot smaller than the restored database, because it only contains the definitions of indexes, not the actual index data. So you should expect the database to need at least 5 GB after being restored. If it contains a lot of indexes, it might be substantially bigger.
The only situation where a SQL dump might be bigger than the restored size is if the dump contains a lot of text values longer than approximately 2 KB. Any text value exceeding that size is automatically compressed by PostgreSQL. But even then it's very unlikely that the restored size will be five times smaller than the dump.
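If you do manage to restore it somewhere with enough space, a quick way to check the actual sizes afterwards (the database and table names below are placeholders):

SELECT pg_size_pretty(pg_database_size('mydb'));             -- whole database
SELECT pg_size_pretty(pg_total_relation_size('big_table'));  -- one table, including its indexes and TOAST data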

Transaction log gets full [closed]

We are on SQL Server 2016. Our recovery mode is FULL. Auto-growth is set to 4GB.
Drive size is 1TB. Transaction log backup frequency is 2 hours.
We have an issue with the transaction log getting full very frequently. Our data size is approximately 1.2TB.
Can someone please suggest what we could do to get rid of this issue? Is there any additional setting that we could change or check?
PS: I'm a beginner in this field, so would appreciate any kind of help.
Thanks.
The log must be sized to accommodate all activity between log backups at a minimum. The log backup frequency should be driven by your recovery point objective (RPO), which is the maximum acceptable data loss as defined by the business.
However, you may need to schedule log backups more frequently to keep the transaction log size reasonable. Two hours is apparently not often enough in your environment, so you need to either increase the log size or increase the log backup frequency.
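A few commands that help while tuning this (the database name and backup path are just examples):

-- Take a log backup; run this on whatever schedule your RPO and log size require.
BACKUP LOG [YourDatabase]
TO DISK = N'D:\Backups\YourDatabase_log.trn';

-- Shows log size and percent used for every database on the instance.
DBCC SQLPERF (LOGSPACE);

-- Shows why the log cannot be truncated (e.g. LOG_BACKUP, ACTIVE_TRANSACTION).
SELECT name, log_reuse_wait_desc FROM sys.databases;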

CSV or SQL for a small site [closed]

If I wanted to run a small personal site that added, say, 2000 rows of data (150 kb) every hour, would there be any significant difference between using a CSV file or SQL database? I am very new to databases and currently have a prototype that appends data to a CSV file for simplicity, but I would like to know if there are any downsides in speed or memory. I will only need write and lookup. Also, if there is a large amount of redundant data, will a relational database be able to store or detect this efficiently? I do not fully understand the concept.
Edit: this question is not a duplicate of my other question. The other concerns an interchange format that should work between a server and a website, while this question is about a method to store data as a flat file or database.
A CSV is a sequential text file, so lookups will be O(n). That is, it will take 10x longer to look something up in a file with 10,000 lines than in one with 1,000.
For this reason, I'd recommend a SQL database, as they have built-in indexing features. You can use something like Access or SQLite for next to nothing.
The only real downside to a SQL database is that you have to learn how to use it.
SQL databases have several features that you would have to implement yourself if you used CSV.
CSV won't let you create indexes for fast searching.
If you always need all the data from a single table (as with application settings), CSV is faster; otherwise it is not.
What are some disadvantages?
No indexing
Cannot be partitioned
No transactions
Cannot have NULL values
In your case, since you have a lot of data, it's better to go with a database rather than CSV.
You can create constraints such as unique key constraints to uniquely identify rows. There are several features that a trivial CSV flat file will not support.
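As a minimal sketch of what that buys you (SQLite syntax; the table and columns are invented for illustration):

CREATE TABLE readings (
    id          INTEGER PRIMARY KEY,
    sensor_name TEXT NOT NULL,
    recorded_at TEXT NOT NULL,            -- ISO-8601 timestamp
    value       REAL,                     -- may be NULL, unlike a bare CSV field
    UNIQUE (sensor_name, recorded_at)     -- rejects duplicate rows at insert time
);

-- An index makes lookups roughly O(log n) instead of the O(n) scan
-- you get from reading a CSV file line by line.
CREATE INDEX idx_readings_time ON readings (recorded_at);

INSERT INTO readings (sensor_name, recorded_at, value)
VALUES ('outdoor_temp', '2024-01-01T12:00:00Z', 3.7);

SELECT sensor_name, value FROM readings
WHERE recorded_at >= '2024-01-01';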

Copying a Live Database without Log File [closed]

I have a SQL Server 2008 Database running on Express. I want to copy only the schema and data to an exact copy of this database. How can I do it and what are the steps involved?
My original database has a huge log file, so I do not want to copy that.
P.S.: I do understand that since this is a LIVE database, there could be some amount of live data that will not be copied. I am OK with that.
You could use the "Generate Scripts" option to create a sql file with all of your schema and data. See this MS article: http://msdn.microsoft.com/en-us/library/ms178078.aspx
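For reference, with "Schema and data" selected in the wizard's advanced options, the output is an ordinary script along these lines, so the original database's log file is never part of the copy (the table and values below are made up):

-- Illustrative fragment only; the real generated script covers every object in the database.
CREATE TABLE [dbo].[Customers](
    [CustomerId] [int] NOT NULL,
    [FullName] [nvarchar](250) NOT NULL,
 CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED ([CustomerId])
);

INSERT [dbo].[Customers] ([CustomerId], [FullName]) VALUES (1, N'Example Name');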