Size of database after being restored from SQL dump [closed]

I have a 5 GB PostgreSQL dump file. I want to restore it with the psql command, but I only have about 1 GB of free space on my computer. Will the restored database take up 5 GB or more?

A SQL dump is typically a lot smaller than the restored database, because it only contains the definition of indexes, not the actual index data. So you should expect the database to need at least 5 GB after being restored. If it contains a lot of indexes, it might be substantially bigger.
The only situation where a SQL dump might be bigger than the restored database is if the dump contains a lot of text data longer than approximately 2 KB, because any text value exceeding that size is automatically compressed when stored. But even then it's very unlikely that the restored size will be five times smaller than the dump.
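If you do manage to free up space and restore, you can verify the actual on-disk size afterwards with PostgreSQL's built-in size functions. A minimal sketch, assuming the restored database is named mydb (a placeholder, substitute your own name):

    -- Total on-disk size of one database, human readable ('mydb' is a placeholder)
    SELECT pg_size_pretty(pg_database_size('mydb'));

    -- The largest tables and indexes inside the current database,
    -- including index and TOAST data that the dump does not carry
    SELECT c.relname,
           pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
    FROM   pg_class c
    WHERE  c.relkind IN ('r', 'i')
    ORDER  BY pg_total_relation_size(c.oid) DESC
    LIMIT  10;

The second query is a quick way to see how much of the restored size is index data that only existed as CREATE INDEX statements in the dump.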

Related

BigQuery cannot load all data when streaming [closed]

Hi, I am streaming data from my apps to BigQuery using C++.
Everything connects fine, but the problem is that the log says there are 665 rows in the streaming buffer.
However, the final number of records in the table is just 4. Does anyone know how to solve this?
"Estimated rows" are only an estimate.
Streaming data in BigQuery is available in real-time (though table copy commands can take up to 90 minutes). I recommend reading this article for more information.
It sounds like you think you're losing data. That's not likely. I recommend checking what you believe is being inserted versus what's actually landing in the table.
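One way to make that comparison directly in BigQuery, sketched below with placeholder project, dataset, and table names: COUNT(*) includes rows that are still in the streaming buffer, while row_count in the __TABLES__ metadata only counts rows already committed to managed storage, so the two numbers can legitimately differ for a while.

    -- Rows visible to queries, including rows still in the streaming buffer
    -- (my_project, my_dataset, my_table are placeholders)
    SELECT COUNT(*) AS visible_rows
    FROM `my_project.my_dataset.my_table`;

    -- Rows already committed to managed storage (excludes the streaming buffer)
    SELECT row_count
    FROM `my_project.my_dataset.__TABLES__`
    WHERE table_id = 'my_table';

If the first query already shows all the rows you sent, nothing was lost; the table metadata simply has not caught up with the streaming buffer yet.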

Tempdb becomes too big [closed]

I was executing one ordinary stored procedure, and afterwards my tempdb had taken up 80 GB of disk space; it was only 8 MB before. How can I overcome this, and why did it happen?
It happened because you did something in the SP that needed tempdb: sorting under certain conditions, or the dreaded DISTINCT, which needs to see all the data, for example.
You can overcome this by rewriting your SQL so it does not need tempdb. The current oversized files you simply fix: redefine the size, restart the server, and tempdb is recreated, as sketched below.
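A rough T-SQL sketch of that reset, assuming the default logical file names tempdev and templog (check yours with the first query, since they may differ on your instance); the new size only takes effect once the instance is restarted and tempdb is recreated:

    -- Check the logical file names and current sizes of tempdb (size is in 8 KB pages)
    SELECT name, size * 8 / 1024 AS size_mb
    FROM   tempdb.sys.database_files;

    -- Redefine the initial size; tempdev/templog are the default logical names
    ALTER DATABASE tempdb MODIFY FILE (NAME = 'tempdev', SIZE = 8MB);
    ALTER DATABASE tempdb MODIFY FILE (NAME = 'templog', SIZE = 8MB);

    -- Restart the SQL Server instance; tempdb is recreated at the new initial size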
By the way, depending on the database, I would NOT consider 80 GB to be excessive on a decent modern server. It depends on WHAT you do, obviously.

Which reasons can cause an index to go into an unusable state in Oracle? [closed]

We are using Oracle DB. In the ALL_INDEXES view, the status value for some of the indexes is showing as UNUSABLE. We have observed this when we move tables from compressed to uncompressed or vice versa, but we have not moved any tables, and it is still showing some of the indexes as unusable. Can someone explain what the possible reasons are?
We don't create lists on SO, but let's say the question is "What may have caused the index to go unusable?". The idea is that anything that touches the table (or partition) segment with a "bulk" DDL operation invalidates the index.
For example, if you truncate a partition or drop it, the global index will be set unusable.
(Edit: loading with SQL*Loader in direct path mode also counts here, as discussed here.)
Another reason would be that someone explicitly set it unusable.
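For checking and fixing this, a minimal sketch using the standard ALL_INDEXES view; the index name is a placeholder:

    -- List indexes that are currently unusable
    SELECT owner, index_name, table_name, status
    FROM   all_indexes
    WHERE  status = 'UNUSABLE';

    -- An index can be marked unusable explicitly ...
    ALTER INDEX my_index UNUSABLE;   -- placeholder index name

    -- ... and is brought back into use by rebuilding it
    ALTER INDEX my_index REBUILD;

Note that for partitioned indexes the status is tracked per partition in ALL_IND_PARTITIONS (ALL_INDEXES shows N/A for them), which is where you would look after a partition truncate or drop.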

How can I convert cells containing numbers over the 15-digit limit back to their real values? [closed]

I didn't realise there is a limit on numeric precision in all versions of Excel (15 significant digits max).
I made an error by manually entering loads of values, only to discover that I lose the accuracy after the first 15 digits, which are in effect replaced by zeroes. It was nearly 3 hours of work, which it looks like I will have to repeat unless anyone knows a way to help me.
Since then I've saved and exited my spreadsheet. Later on, a customer came back to me to say that the numbers I gave them are inaccurate, as they are only accurate to 15 digits.
I then researched this on the net and found that I should have formatted the column as Text before entering any numbers longer than 15 digits.
Does anyone know if there is any way to get the numbers back, or will I have to generate my spreadsheet all over again?
There is no way to get the numbers back. You will have to generate your spreadsheet again.

Why is PDF file size so small? [closed]

I have a few of my textbooks for this semester as PDFs. These are 1000-page computer science textbooks full of graphics. When I downloaded one, it took just a few seconds, which was amazing; I thought something had gone wrong. The entire textbook was 9.7 MB. I opened it up and, sure enough, the entire textbook was there; all the images and everything loaded instantly (and I have a really terrible internet connection).
I am just wondering what amazing compression technique allows you to store 1000 pages of a textbook in under 10 MB?
Here is a screenshot of the file properties; I am so baffled.
A typical text page is between 3 KB and 6 KB of plain text. So the text of your 1000-page book would fit in 6 MB even without compression.
Normal compression tools can reduce plain ASCII text by something like 60-80%.
So let's say it's 75%; then you need 0.25 x 6 MB = 1.5 MB for the text. That leaves roughly 8 MB for the pictures.
For vector-based images like SVG that's a lot; they are small and compress about as well as text. But roughly 8 MB does not leave room for a lot of embedded bitmaps.