I'm running Oracle 11g XE on Debian Linux and I really need to know when the "user data" is coming close to 11 GB.
The maximum amount of user data in an Oracle Database XE database cannot exceed 11 gigabytes. If the user data grows beyond this limit, then an ORA-12953 error will appear.
(c) docs.oracle.com
A few questions here:
What is "user data", exactly?
Which tablespaces count as user data?
Do system tablespaces like SYSAUX count as user data?
Do separate files like archived redo logs count as user data?
Thanks for the help, guys; I'm really confused.
User data is the persistent data your application creates and uses, as distinct from the metadata the database generates itself (such as the data dictionary).
The tablespaces you need to monitor are USERS and any other tablespaces you have created. The SYSTEM and SYSAUX tablespaces are reserved for the database's own data and so don't count; TEMP and UNDO (or whatever else you call your temporary and rollback tablespaces) don't count either.
Redo logs and other files are external to the database and so don't count either.
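For example, one way to keep an eye on the limit could be to sum the allocated segment space outside the system tablespaces. This is a minimal sketch; which tablespaces to exclude depends on your own setup:
-- Allocated space per non-system tablespace; the exclusion list
-- is an assumption, adjust it to your database.
SELECT tablespace_name,
       ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) AS used_gb
FROM dba_segments
WHERE tablespace_name NOT IN ('SYSTEM', 'SYSAUX')
GROUP BY tablespace_name;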
I want to create a tablespace in ASM in an Oracle RAC database: CREATE TABLESPACE <tablespacexxx> DATAFILE '+data';. DATA is one disk group. I saw there are multiple disk groups when executing SELECT * FROM V$ASM_DISKGROUP;
There is one disk group called DATA and another group called RECO. The VOTING_FILES column differs between the two groups: it is set to Y for DATA and to N for RECO. I am wondering what the differences between the two are, and whether I can use either one to create tablespaces.
Use +DATA for your datafiles. +RECO or +FRA should be for the fast recovery area, online redo logs, voting files, and control files.
The Fast Recovery Area is Oracle-managed disk space that provides a centralized disk location for backup and recovery files.
https://docs.oracle.com/en/database/oracle/oracle-database/19/haovw/oracle-database-configuration-best-practices.html#GUID-DB511B6A-D220-4556-B31B-18E3D310BA61
https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/create-diskgroups.html#GUID-CACF13FD-1CEF-4A2B-BF17-DB4CF0E1800C
According to the disk group names, DATA is supposed to contain user data, and RECO is meant to store the recovery files. It's more like an agreement among the developers and DBAs.
So your new tablespace needs to be placed in DATA.
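For example, a minimal sketch of creating a tablespace in the DATA disk group (the tablespace name and sizes here are placeholders):
-- ASM chooses the actual file name inside the +DATA disk group.
CREATE TABLESPACE app_data DATAFILE '+DATA' SIZE 10G
AUTOEXTEND ON NEXT 1G;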
Voting disks manage information about RAC node membership. You should not use them to store any user data.
I have an SAP B1 system that's being migrated from Microsoft SQL Server to HANA DB. Our solution in the staging environment is producing huge transaction logs, tens of gigabytes in an hour, but the system isn't receiving production workloads yet. SAP have indicated that the database is fine and that it's our software that's at fault, but I'm not clear on how to identify this. As far as I can tell, each program is sleeping between poll intervals, and the intervals are not high (one query per minute). We just traced SQL for an hour, and there were only in the region of 700 updates, but still tens of gigabytes of transaction log.
Does anybody have an idea how to debug the transaction log? I'd like to see what's being recorded.
Thanks.
The main driver of high transaction log data is not the number of SQL commands executed but the size/number of records affected by those commands.
In addition to DML commands (DELETE/INSERT/UPDATE), DDL commands like CREATE TABLE and ALTER TABLE also produce redo log data. For example, re-partitioning a large table will produce a large volume of redo logs.
For HANA there are tools (hdblogdiag) that allow inspecting the log volume structures. However, the usage and output of this (and similar) tools absolutely require extensive knowledge of the internals of how HANA manages redo logs.
For the OP's situation, I recommend checking the volume of data changes caused by both DML and DDL.
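As a first, rough check, the monitoring view M_LOG_SEGMENTS shows how quickly log segments are being filled; this is only a sketch, and the exact columns can vary between HANA revisions:
-- Count log segments per service and state; a fast-growing number
-- of closed/not-yet-backed-up segments points at heavy redo generation.
SELECT host, port, state, COUNT(*) AS segments
FROM m_log_segments
GROUP BY host, port, state;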
We had the same issue.
There is a bug in SAP HANA SPS11 < 112.06 and SPS12 < 122.02 in the LOB garbage collector for the row store.
You can take a look at SAP Note 2351467.
In short, you can either
upgrade HANA
or convert the rowstore tables containing LOB columns into columnstore with the query ALTER TABLE "<schema_name>"."<table_name>" COLUMN;
You can find the list with this query:
SELECT DISTINCT lo.schema_name,
       lo.table_name
FROM sys.m_table_lob_files lo
INNER JOIN tables ta
        ON lo.table_oid = ta.table_oid
       AND ta.table_type = 'ROW';
or disable the row store LOB garbage collector by editing indexserver.ini to set garbage_lob_file_handler_enabled = false under the [row_engine] section.
I'm working on developing a fast way to make a clone of a database to test an application. My database has some specific tables that are quite big (50+ GB), but the big majority of the tables only have a few MB. On my current server, the dump + restore takes some hours. These big tables have date fields.
With that context in mind, my question is: is it possible to use some type of restriction on table rows to select the data that is being dumped? E.g., on table X, only dump the rows whose date is Y.
If this is possible, how can I do it? If it's not possible, what would be a good alternative?
You can use COPY (SELECT whatever FROM yourtable WHERE ...) TO '/some/file' to limit what you export.
COPY command
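A minimal sketch, assuming a table x with a date column created_at (both names are placeholders); note that the server process, not the client, writes the target file:
-- Export only rows newer than a cutoff date as CSV.
COPY (SELECT * FROM x WHERE created_at >= DATE '2023-01-01')
TO '/some/file.csv' WITH (FORMAT csv, HEADER);
From psql, \copy with the same query writes the file on the client side instead.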
You could use row level security and create a policy that lets the dumping database user see only those rows that you want to dump (make sure that that user is neither a superuser nor owns the tables, because these users are exempt from row level security).
Then dump the database with that user, using the --enable-row-security option of pg_dump.
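A sketch of what that could look like, assuming a dump role dumper and a date column created_at (both placeholders):
-- Only rows matching the policy are visible to the dump role.
ALTER TABLE x ENABLE ROW LEVEL SECURITY;
CREATE POLICY dump_filter ON x
    FOR SELECT TO dumper
    USING (created_at >= DATE '2023-01-01');
-- Then dump as that role:
-- pg_dump --enable-row-security -U dumper mydb > partial.sql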
I'm using Windows Server 2008 R2 Standard.
I'm running PostgreSQL 9.0.1, compiled by Visual C++ build 1500, 32-bit.
I have a C:/ and a D:/ drive:
C:/ --> 6.7 GB free space (almost full, and my server performance is running low)
D:/ --> 141 GB free space
Currently my PostgreSQL data is stored on C:/. Now I want to route or add a path to D:/ without migrating the data from C:/ to D:/, because my PostgreSQL data is already around 148 GB. It is a heavy, massive store.
If this succeeds, I should still be able to do a query like SELECT * FROM table_bla_bla and have it return results from both drives?
Please do not suggest that I change PostgreSQL to another DB or anything of that sort.
I'm running 39,763 GPS meter devices that send data to my server.
I have to take care of this server because my expert has already passed away.
You need to use tablespaces.
Create the tablespace, for example CREATE TABLESPACE second_drive LOCATION 'D:/postgresdata/' (see this other answer if you get permission denied errors)
ALTER TABLE table_bla_bla SET TABLESPACE second_drive; (note that this physically moves the table's data to the new drive)
Tablespaces allow you to decide which tables go on which drives. That can help performance by letting you control where reads and writes go, but it also helps with space.
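New tables can also be created directly in the new tablespace; a sketch with a made-up table definition:
-- Data for this table lands on the D:/ drive from the start.
CREATE TABLE gps_readings_archive (
    device_id  bigint,
    reading_at timestamp,
    payload    text
) TABLESPACE second_drive;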
Postgres places individual tables in TABLESPACEs (each of which relates to a single disk), which is enough if you have multiple tables: you can achieve what you need by moving some tables to the other disk.
On the other hand, if you have a large table that you need to split over multiple disks, you need to use Postgres's Horizontal Partitioning capability.
This builds on tablespaces by allowing you to create a master table table_bla_bla which is actually just a facade on top of two or more tables that actually hold the data. These data tables can then be put on different tablespaces, effectively splitting your data over disks.
For this you would:
Rename your current table_bla_bla to something like table_bla_bla_c.
Create a new table_bla_bla master table.
Alter table_bla_bla_c to mark that it inherits from table_bla_bla.
Create a new table_bla_bla_d table that inherits from table_bla_bla, and specify the tablespace as the D drive.
Apply partitioning triggers and check constraints as per the partitioning documentation.
Once this is in place, you can arrange it so that any inserts into table_bla_bla cause new records to be created on the D drive. Selects on table_bla_bla will read from both disks.
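A sketch of those steps using PostgreSQL 9.0-era inheritance partitioning (all object names are placeholders, and the trigger assumes every new row should land on the D drive):
-- Steps 1-4: rename, recreate the master, attach the old table,
-- and add a child table on the D drive.
ALTER TABLE table_bla_bla RENAME TO table_bla_bla_c;
CREATE TABLE table_bla_bla (LIKE table_bla_bla_c);
ALTER TABLE table_bla_bla_c INHERIT table_bla_bla;
CREATE TABLE table_bla_bla_d () INHERITS (table_bla_bla)
    TABLESPACE second_drive;

-- Step 5: route new inserts to the D-drive child.
CREATE FUNCTION table_bla_bla_insert() RETURNS trigger AS $$
BEGIN
    INSERT INTO table_bla_bla_d VALUES (NEW.*);
    RETURN NULL;  -- suppress the insert into the empty master
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table_bla_bla_route
    BEFORE INSERT ON table_bla_bla
    FOR EACH ROW EXECUTE PROCEDURE table_bla_bla_insert();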
I am trying to get the size of all tablespaces on an Oracle database.
I have tried 2 ways of getting the info.
Option 1 gives me good values, but it doesn't return any values when the tablespace doesn't have a database file. Specifically, it isn't returning any values for the "TEMP" and "TESTTODELETE" tablespaces.
The only differences I've noticed between these 2 tablespaces and the others are that these 2 don't have .dbf files.
Option 2 gives me the correct values for some tablespaces but is totally off at other times. Option 2 does return something for the "TEMP" tablespace, but neither option returns anything for the "TESTTODELETE" tablespace.
What is the best way to get the tablespace total size in MB for all tablespaces so that it reflects what is displayed in the Enterprise Manager?
dba_data_files contains just that: tablespaces that have data files. If you need to include tablespaces that aren't backed by data files (e.g. temp tablespaces), you'll need to include dba_temp_files in your query. They have almost exactly the same column layout, so it should be easy to UNION them together.
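A sketch of that UNION, driven from dba_tablespaces so that tablespaces with no files at all still show up with a size of zero:
-- Sum file sizes per tablespace, covering both data and temp files.
SELECT ts.tablespace_name,
       ROUND(NVL(SUM(f.bytes), 0) / 1024 / 1024, 2) AS size_mb
FROM dba_tablespaces ts
LEFT JOIN (SELECT tablespace_name, bytes FROM dba_data_files
           UNION ALL
           SELECT tablespace_name, bytes FROM dba_temp_files) f
       ON f.tablespace_name = ts.tablespace_name
GROUP BY ts.tablespace_name;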
As for dba_tablespace_usage_metrics, the capacity it reports is the maximum size for a tablespace: if a tablespace has autoextend enabled, the autoextend limit is included in the calculation. Furthermore, I don't recommend relying on v$parameter to determine the block size. Instead, join to dba_tablespaces, because each tablespace can have its own block size.
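For example, a sketch that converts the metrics (which are reported in blocks) using each tablespace's own block size:
-- Maximum and used size per tablespace, in MB.
SELECT m.tablespace_name,
       ROUND(m.tablespace_size * t.block_size / 1024 / 1024, 2) AS max_mb,
       ROUND(m.used_space * t.block_size / 1024 / 1024, 2) AS used_mb
FROM dba_tablespace_usage_metrics m
JOIN dba_tablespaces t
  ON t.tablespace_name = m.tablespace_name;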