Unable to allocate new pages in table space "XXX" - IBM DB2 SQL Replication

Product: IBM DB2
OS: Windows 2008 R2
I am trying to perform SQL replication on my database. I have created the capture tables, but while trying to register the tables I got the following error message:
[IBM][CLI Driver][DB2/NT64] SQL0289N Unable to allocate new pages in table space "xxxxxxx". SQLSTATE=57011
Thanks in advance.

I suggest you check the extent size of your table space. Run the following command: SELECT TBSPACE, OWNER, EXTENTSIZE FROM SYSCAT.TABLESPACES. This will give you the EXTENTSIZE of each table space. You may need to change the extent size depending on your requirements.
An extent is a block of storage within a table space container. It represents the number of pages of data that will be written to a container before writing to the next container. When you create a table space, you can choose the extent size based on your requirements for performance and storage management. See the DB2 documentation for more details.
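SQL0289N usually means the table space itself has run out of allocatable pages, so checking how full it is and enlarging it may also help. A minimal sketch, assuming the table space is named xxxxxxx, that it is a DMS or automatic-storage table space, and that the SYSIBMADM administrative views are available:

-- How full each table space currently is
SELECT TBSP_NAME, TBSP_TYPE, TBSP_UTILIZATION_PERCENT
FROM SYSIBMADM.TBSP_UTILIZATION;

-- For a DMS table space, add pages to every container (the number is in pages)
ALTER TABLESPACE xxxxxxx EXTEND (ALL 10000);

-- For an automatic-storage table space, remove any size cap instead
ALTER TABLESPACE xxxxxxx MAXSIZE NONE;

If the table space is SMS, its growth is limited by the free space on the filesystem holding its containers, and the statements above do not apply.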

Related

SQL: How to alter all varchar columns in all tables to nvarchar?

I need to convert all varchar columns in about 40 tables (filled with data) to nvarchar columns. This is planned to happen on a dedicated MS SQL Server used only for this purpose. The result should then be moved to Azure SQL.
Where should the conversion be done: on the old SQL Server, or after moving the data to Azure SQL?
According to Remus Rusanu's answer https://stackoverflow.com/a/8157951/1346705, new nvarchar columns are created in the process and the old varchar columns are dropped. The space can be reclaimed by DBCC CLEANTABLE or by using ALTER TABLE ... REBUILD. Are the dropped varchar columns carried along into the backup, or does the backup/restore also remove the dropped columns?
Can the process be somehow automated using a universal SQL script? Or is it necessary to write the script for each individual table?
Context: We are a 3rd party with respect to the enterprise information system (IS). Our product reads from the IS SQL database and presents the data in a way that would otherwise be expensive to implement in the IS. The enterprise information system is now being migrated to a new version and is to run on Azure SQL. The database of the IS has been changed heavily, and one of the changes was to abandon the old 8-bit text encoding (varchar) and use Unicode (nvarchar) instead. Our system was also used for collecting manually typed data, using the same encoding the old IS used.
Migration is to be done by taking an old-style backup (SqlCmd producing xxx.bak files) and restoring it on another plain old SQL Server. Then we run a script that removes all the tables, views, and stored procedures that can be reconstructed from the IS; one of the main reasons is that this SQL code uses features that are not accepted by SqlPackage.exe, the new backup tool that produces the xxx.bacpac file. The bacpac file is then restored in Azure SQL.
Where should the conversion be done: on the old SQL Server, or after moving the data to Azure SQL?
I would do it on the local SQL Server first. Running this on the Azure database might cause you to run into issues such as hitting your DTU limits or disk I/O throttling.
Are the dropped varchar columns carried along into the backup, or does the backup/restore also remove the dropped columns?
The space won't be released back to the filesystem, and the backup doesn't process free space either, so you will not see much change there. You might want to read more on DBCC CLEANTABLE before proceeding, though.
Can the process be somehow automated using a universal SQL script? Or is it necessary to write the script for each individual table?
It can be automated; you can use dynamic SQL to look up the column types and generate the statements. You will also have to check whether any of those columns are part of indexes; if so, you have to drop those indexes first.
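A minimal sketch of the dynamic-SQL approach, assuming none of the varchar columns are referenced by indexes, constraints, or computed columns (those would have to be dropped and recreated around the change); it only generates the ALTER statements from INFORMATION_SCHEMA.COLUMNS so you can review them before running anything:

-- Generate ALTER TABLE ... ALTER COLUMN statements for every varchar column.
-- Lengths and nullability are preserved; varchar(max) and anything longer than
-- 4000 characters become nvarchar(max).
SELECT 'ALTER TABLE ' + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME)
     + ' ALTER COLUMN ' + QUOTENAME(COLUMN_NAME) + ' nvarchar('
     + CASE WHEN CHARACTER_MAXIMUM_LENGTH = -1 OR CHARACTER_MAXIMUM_LENGTH > 4000
            THEN 'max'
            ELSE CAST(CHARACTER_MAXIMUM_LENGTH AS varchar(10)) END + ')'
     + CASE WHEN IS_NULLABLE = 'NO' THEN ' NOT NULL' ELSE ' NULL' END + ';'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE = 'varchar'
ORDER BY TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME;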
I suggest making the schema changes beforehand on the old instances. Even if you don't bother cleaning up space with DBCC CLEANTABLE or ALTER...REBUILD, the resultant bacpac size will be the same because, unlike a physical backup/restore, a bacpac file is just a compressed package of schema and data.
Consider using SQL Server Data Tools (SSDT) to facilitate the schema changes. It will take care of all the dependencies (constraints, indexes, etc.) that are a challenge for a "universal" T-SQL solution. SSDT generally generates a migration script that employs temp tables for such schema changes, so the end result won't have wasted space in your old database. However, you will need enough unused space in the database to hold the old and new objects side by side.

The database has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions

We are using Azure SQL Database with 32 GB of storage. I have checked the storage utilization from the Azure portal and it shows 68% used (22 GB). However, when I try to create an index, I get the error below.
The database has reached its size quota. Partition or delete data,
drop indexes, or consult the documentation for possible resolutions.
Thanks.
Make sure the database does not have a maximum size set; you can check it with the following query. This size limit is independent of the size limit of the tier.
SELECT DATABASEPROPERTYEX('db1', 'MaxSizeInBytes') AS DatabaseDataMaxSizeInBytes
You can change that max size, up to the limit of the tier the database is using, for example:
ALTER DATABASE CURRENT MODIFY (MAXSIZE = 100 MB);
Also verify that the database is not reaching the maximum tempdb size for its service tier; see the Azure SQL resource limits documentation for the current tempdb limit of the tier the database is using.
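A quick way to see whether the quota is being hit by space allocated to the data files rather than by data actually stored (the portal's used-space figure) is to compare the two; a sketch using standard catalog views:

-- Space allocated to the data files vs. space actually used inside them, in MB
SELECT SUM(CAST(size AS bigint)) * 8 / 1024                            AS allocated_mb,
       SUM(CAST(FILEPROPERTY(name, 'SpaceUsed') AS bigint)) * 8 / 1024 AS used_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';

If allocated space is at the quota while used space is well below it, shrinking the files (for example with DBCC SHRINKDATABASE) can release the difference.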

Approximate disk space consumption of rows on SQL Server

I'd like to understand what determines the size of a SQL Server 12 database. The mdf file is 21.5 GB. Using the "Disk Usage by Top Tables" report in SQL Server Management Studio, I can see that 15.4 GB is used by the "Data" of one table. This table has 1,691 rows of 4 columns (int, varchar(512), varchar(512), image). I assume the image column is responsible for most of the consumption. But
Select (sum(datalength(<col1>)) + ... )/1024.0/1024.0 as MB From <Table>
only gives 328.9 MB.
What might be the reason behind this huge discrepancy?
Additional information:
For some rows the image column is updated regularly.
This is a screenshot of the report:
If we can trust it, indices or unused space should not be the cause.
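One way to narrow down where the 15 GB actually sits is to compare the logical DATALENGTH sum above with what the storage engine has reserved for the table; a sketch, assuming the table is dbo.YourTable (a placeholder name):

-- Reserved vs. used pages for the table; LOB pages hold the image column data
EXEC sp_spaceused 'dbo.YourTable';

SELECT SUM(reserved_page_count) * 8 / 1024 AS reserved_mb,
       SUM(used_page_count)     * 8 / 1024 AS used_mb,
       SUM(lob_used_page_count) * 8 / 1024 AS lob_used_mb
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('dbo.YourTable');

If used_mb is far above the DATALENGTH total, the extra pages are being held by something other than current row data (for example, ghost records or row versions that cannot be cleaned up yet).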
Maybe you are using a lot of indexes on the table; these all add up. Or maybe your auto-grow settings are wrong.
The reason was a long-running transaction on another, unrelated database (!) on the same SQL Server instance. The read committed snapshot isolation level filled the version store. Disconnecting the other application reduced the space usage to a sensible amount.
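If you need to confirm a situation like this, a sketch of how one might look for the offending transaction (DBCC OPENTRAN and the snapshot-transaction DMV are standard; 'OtherDatabase' is a placeholder name):

-- Oldest active transaction in the suspect database
DBCC OPENTRAN ('OtherDatabase');

-- Snapshot/RCSI transactions, oldest first; long-running ones keep old row versions alive
SELECT session_id, transaction_id, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;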

Is tempdb scalable in Azure SQL DB?

I'm trying to bulk import a very large table (75 GB) into Azure SQL DB (pricing tier P6 Premium, 1000 DTUs). It failed with the following error message:
“Msg 40544, Level 17, State 2, Line 179
The database ‘tempdb’ has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions.”
I looked up a few blogs, and they suggest increasing the tier. I was wondering if I could just scale up tempdb without having to increase the tier itself.
Right now I'm chunking the file into smaller batches to load it, but if I have to build an index on this table after loading, I'm pretty sure it would fail again with the same error.
Any thoughts?
No. You have no direct control over tempdb and its behavior. However, as you scale up your service tier, it's my understanding that your thresholds within tempdb also go up.
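Two things that may help while staying on the same tier (a sketch; the table and index names are placeholders): watch how much of tempdb the load actually consumes, and make sure the index build does not sort in tempdb.

-- How tempdb space is currently being used (counts are 8 KB pages)
SELECT SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
       SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb
FROM tempdb.sys.dm_db_file_space_usage;

-- Sort in the user database instead of tempdb (this is the default, but worth stating explicitly)
CREATE INDEX IX_BigTable_Key ON dbo.BigTable (KeyColumn)
WITH (SORT_IN_TEMPDB = OFF);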

How to add or route PostgreSQL data to a new hard drive

I'm using Windows Server 2008 R2 Standard.
I'm running PostgreSQL 9.0.1, compiled by Visual C++ build 1500, 32-bit.
I have a C:/ and a D:/ drive:
C:/ --> 6.7 GB free space (almost full, and my server performance is suffering)
D:/ --> 141 GB free space
Currently my PostgreSQL data is stored on C:/. Now I want to route or add a path to D:/ without migrating the data from C:/ to D:/, because my PostgreSQL data is already around 148 GB, which is too heavy to move.
If this works, I should still be able to run a query like SELECT * FROM table_bla_bla and have it return results from both drives?
Please do not suggest switching PostgreSQL to another DB or anything like that.
I am running 39,763 GPS meter devices that send data to my server.
I have to take care of this server because my expert has already passed away.
You need to use tablespaces.
Create the tablespace, for example: CREATE TABLESPACE second_drive LOCATION 'D:/postgresdata/' (if you get permission-denied errors, check that the PostgreSQL service account owns, or has full control of, that directory).
ALTER TABLE table_bla_bla SET TABLESPACE second_drive;
Tablespaces allow you to decide which tables go on which drives and that can help speed up performance by ensuring you control where reads and writes go, but it also helps with space.
Postgres places individual tables in TABLESPACEs (which relate to a single disk), which is enough if you have multiple tables and you can achieve what you need by moving some tables to the other disk.
On the other hand, if you have a large table that you need to split over multiple disks, you need to use Postgres's Horizontal Partitioning capability.
This builds on tablespaces by allowing you to create a master table table_bla_bla which is actually just a facade on top of two or more tables that actually hold the data. These data tables can then be put on different tablespaces, effectively splitting your data across disks.
For this you would (a sketch follows below):
1. Rename your current table_bla_bla to something like table_bla_bla_c.
2. Create a new table_bla_bla master table.
3. Alter table_bla_bla_c to mark that it inherits from table_bla_bla.
4. Create a new table_bla_bla_d table that inherits from table_bla_bla, and specify the tablespace on the D drive.
5. Apply partitioning triggers and check constraints as per the partitioning documentation.
Once this is in place, you can arrange it so that any inserts into table_bla_bla cause new records to be created on the D drive. Selects on table_bla_bla will read from both disks.
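A minimal sketch of those steps for inheritance-based partitioning on PostgreSQL 9.0 (the routing here simply sends all new rows to the D-drive child; table and tablespace names follow the examples above):

-- 1. Keep the existing data where it is, under a new name
ALTER TABLE table_bla_bla RENAME TO table_bla_bla_c;

-- 2. New, empty master table with the same column layout
CREATE TABLE table_bla_bla (LIKE table_bla_bla_c INCLUDING DEFAULTS);

-- 3. The renamed table becomes a child of the master
ALTER TABLE table_bla_bla_c INHERIT table_bla_bla;

-- 4. New child table for incoming data, stored on the D: drive tablespace
CREATE TABLE table_bla_bla_d () INHERITS (table_bla_bla) TABLESPACE second_drive;

-- 5. Route inserts against the master into the new child
CREATE OR REPLACE FUNCTION table_bla_bla_insert() RETURNS trigger AS $$
BEGIN
    INSERT INTO table_bla_bla_d VALUES (NEW.*);
    RETURN NULL;  -- the row is stored in the child, not in the master itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table_bla_bla_insert_trg
    BEFORE INSERT ON table_bla_bla
    FOR EACH ROW EXECUTE PROCEDURE table_bla_bla_insert();

With this in place, SELECT * FROM table_bla_bla reads from both children (and therefore both drives), while new rows land on D:. Check constraints on a real partition key would be needed later if you want the planner to skip partitions.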