Does Oracle Apex support table partitioning?

I'm new to Oracle and Apex, and I'm building a table that will potentially become very large over time.
It would be ideal to partition by date range, and then subpartition by hash.
Does Apex support table partitioning?
Googling yielded no results, which makes me think it's unsupported.
(All I could find was Oracle DB articles, not Apex.)

Apex is just an application layer on top of your Oracle database; it does not know (or care) how a table is organised. I have many applications using partitioned tables, and this is not an issue.
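For example, a minimal sketch of such a table (hypothetical names): a range-hash partitioned table created in the parsing schema with any SQL tool (including APEX's SQL Workshop), which APEX pages then query like any other table.
create table event_log (
    event_id    number         not null,
    event_date  date           not null,
    customer_id number         not null,
    payload     varchar2(4000)
)
partition by range (event_date)
subpartition by hash (customer_id) subpartitions 8
(
    partition p2023 values less than (date '2024-01-01'),
    partition p2024 values less than (date '2025-01-01'),
    partition pmax  values less than (maxvalue)
);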

Related

How to insert R dataframe into existing table in SQL Server

After trying a few different packages and methods found online, I am yet to find a solution that works for inserting a dataframe from R into an existing table in SQL Server.
I've had great success doing this with MySQL, but SQL Server seems to be more difficult.
I have managed to write a new table using the DBI package, but I can't find a way to insert into an existing table using this method. Looking at the documentation, there doesn't seem to be a way of inserting.
As there are more than 1000 rows of data, using sqlQuery from the RODBC package also seems unfeasible.
Can anybody suggest a working method for inserting large amounts of data from a dataframe into an existing SQL table?
I've had similar needs using R and PostgreSQL with the R-Postgres-specific drivers, and I imagine similar issues exist with SQL Server. The best solution I found was to write to a temporary table in the database, using either dbWriteTable or one of the underlying functions that write from a stream, to load very large tables (for Postgres, postgresqlCopyInDataframe, for example). The latter usually requires more work in terms of defining and aligning SQL data types and R class types to ensure the write succeeds, whereas dbWriteTable tends to be a bit easier. Once the data is in a temporary table, issue an SQL statement to insert from it into your real table, just as you would within the database environment. Below is an example using high-level DBI library database calls:
dbExecute(conn,"start transaction;")
dbExecute(conn,"drop table if exists myTempTable")
dbWriteTable(conn,"myTempTable",df)
dbExecute(conn,"insert into myRealTable(a,b,c) select a,b,c from myTempTable")
dbExecute(conn,"drop table if exists myTempTable")
dbExecute(conn,"commit;")

Performance issues with outer joins to view in Oracle 12c

Two of my clients have recently upgraded to Oracle 12c 12.1.0.2. Since the upgrade I am experiencing significant performance degradation on queries using views with outer joins. Below is an example of a simple query that runs in seconds on the old Oracle 11g 11.2.0.2 database but takes several minutes on the new 12c database. Even more perplexing, this query runs reasonably fast (but not as fast) on one of the 12c databases, but not at all on the other. The performance is so bad on the one 12c database that the reporting I've developed is unusable.
I've compared indexes and system parameters between the 11g and two 12c databases, and have not found any significant differences. There is a difference between the Execution Plans, however. On 11g the outer join is represented as VIEW PUSHED PREDICATE but on 12c it is represented as a HASH JOIN without the PUSHED PREDICATE.
When I add the hint /*+ NO_MERGE(pt) PUSH_PRED(pt) */ to the query on the 12c database, the performance is back within seconds.
Adding a hint to the SQL is not an option within our Crystal Reports (at least I don't believe it is, and there are several reports anyway), so I am hoping we can figure out why performance is acceptable on one 12c database but not on the other.
My team and I are stumped at what to try next, and particularly why the response would be so different between the two 12c databases. We have researched several articles on performance degradation in 12c, but nothing appears particularly applicable to this specific issue. As an added note, queries using tables instead of views are returning results within an acceptable timeframe. Any insights or suggestions would be greatly appreciated.
Query:
select pi.*
, pt.*
from policyissuance_oasis pi
, policytransaction_oasis pt
where
pi.newTranKeyJoin = pt.polTranKeyJoin(+)
and pi.policyNumber = '1-H000133'
and pi.DateChars='08/10/2017 09:24:51' -- 2016 data
--and pi.DateChars = '09/26/2016 14:29:37' --2013 data
order by pi.followup_time
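For reference, the hinted form mentioned above looks like this (same query, with the hint that brings the runtime back to seconds):
select /*+ NO_MERGE(pt) PUSH_PRED(pt) */
  pi.*
, pt.*
from policyissuance_oasis pi
, policytransaction_oasis pt
where
pi.newTranKeyJoin = pt.polTranKeyJoin(+)
and pi.policyNumber = '1-H000133'
and pi.DateChars='08/10/2017 09:24:51' -- 2016 data
order by pi.followup_time;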
As krokodilko says, run these:
explain plan for
select pi.*
, pt.*
from policyissuance_oasis pi
, policytransaction_oasis pt
where
pi.newTranKeyJoin = pt.polTranKeyJoin(+)
and pi.policyNumber = '1-H000133'
and pi.DateChars='08/10/2017 09:24:51' -- 2016 data
--and pi.DateChars = '09/26/2016 14:29:37' --2013 data
order by pi.followup_time;
select * from table(dbms_xplan.display());
and then you will probably see this at the bottom of the plan:
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
There, the dynamic sampling concept should be the center of concern for this performance problem (level=2 is the default value; the level ranges from 0 to 11).
In fact, Dynamic sampling (DS) was introduced to improve the optimizer's ability to generate good execution plans. This feature was enhanced and renamed Dynamic Statistics in Oracle Database 12c. The most common misconception is that DS can be used as a substitute for optimizer statistics, whereas the goal of DS is to augment optimizer statistics; it is used when regular statistics are not sufficient to get good quality cardinality estimates.
For serial SQL statements, the dynamic sampling level is controlled by the optimizer_dynamic_sampling parameter, but note that from Oracle Database 12c Release 1 the existence of SQL plan directives can also initiate dynamic statistics gathering when a query is compiled. This is a feature of adaptive statistics and is controlled by the database parameter optimizer_adaptive_features (OAF) in Oracle Database 12c Release 1 and optimizer_adaptive_statistics (OAS) in Oracle Database 12c Release 2.
In other words, from Oracle Database 12c Release 1 onwards (we also use 12.1.0.2 at the office), DS will be used if certain adaptive features are enabled by setting the relevant parameter to TRUE.
Serial statements are typically short running, and any DS overhead at compile time can have a large impact on overall system performance (if statements are frequently hard parsed). For systems that match this profile, setting OAF=FALSE is recommended (alter session set optimizer_adaptive_features=FALSE; notice that you should alter the session, not the system).
For Oracle Database 12c Release 2 onwards, keeping the default OAS=FALSE is recommended.
Parallel statements are generally more resource intensive, so it's often worth investing in additional overhead at compile time to potentially find a better SQL execution plan.
For serial SQL statements, you may try to manually set the value of optimizer_dynamic_sampling (assuming there are no relevant SQL plan directives). If we were to issue a similar style of query against a larger table that has the parallel attribute set, we would see dynamic sampling kicking in.
When should you use dynamic sampling? DS is typically recommended when you know you are getting a bad execution plan due to complex predicates, but it shouldn't be enabled system-wide, as I mentioned before.
When is it not a good idea to use dynamic sampling? When query compile times need to be as fast as possible, for example for unrepeated OLTP queries where you can't amortize the additional cost of compilation over many executions.
As a last word, in your case it could be beneficial to set the optimizer_adaptive_features parameter to FALSE at the session level for the sessions running these SQL statements and see the results you gain.
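As a rough sketch (parameter names as discussed above; exact defaults and available parameters depend on your release), you could check the current settings and then change them for your session only:
select name, value, isdefault
from v$parameter
where name in ('optimizer_dynamic_sampling', 'optimizer_adaptive_features');
alter session set optimizer_adaptive_features = FALSE;  -- 12.1; in 12.2 the parameter is optimizer_adaptive_statistics
alter session set optimizer_dynamic_sampling = 2;       -- set the level explicitly; 2 is the default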
We discovered the cause of the performance issue. The following 2 system parameters were changed at the system level by the DBAs for the main application that uses our client's Oracle server:
_optimizer_cost_based_transformation = OFF
_optimizer_reduce_groupby_key = FALSE
When we changed them at the session level to the following, the query above that joins the 2 views returns results in less than 2 seconds:
alter session set "_optimizer_cost_based_transformation"=LINEAR;
alter session set "_optimizer_reduce_groupby_key" = TRUE;
COMMIT;
Changing this parameter did not have an impact on performance:
alter session set optimizer_adaptive_features=FALSE;
COMMIT;
We also found that changing the following parameter improved performance even more for our more complex views:
alter session set optimizer_features_enable="11.2.0.3";
COMMIT;

Adding transformations in Sybase replication server

We have Oracle 11gR2 as the primary source database and SAP HANA as the target. We are testing SAP (Sybase) Replication Server for replication from the primary Oracle database to the target HANA database.
We need to add extra columns such as RECORD_DATE and LAST_MODIFIED_DATE to the HANA tables. Is it possible to add transformations or extra columns to the target tables which are not present in the primary database?
Best Regards
Are you thinking of adding these fields during replication, or do you want to merge them after replication? If you want to merge them after replication, simply go to HANA Studio and create an Information View to get the merged (or simply joined) data from the different tables.
If the table is not present in the source system, then instead of replicating it, create an Excel/flat file and import it into HANA using the Import option on the right-hand side of HANA Studio.
The only way to alter a table definition in HANA is with the ALTER TABLE SQL statement; there are no other shortcuts. Or just import and create a join.
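For example, a rough sketch of adding the two extra columns on the HANA target (hypothetical schema/table names, and assuming your HANA release accepts CURRENT_TIMESTAMP as a column default, which covers the insert case):
-- add the audit columns to the target table; the defaults populate them on insert
alter table MYSCHEMA.MYTARGET add (
    RECORD_DATE        timestamp default current_timestamp,
    LAST_MODIFIED_DATE timestamp default current_timestamp
);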
I'm assuming you want to capture auditing data for records inserted/updated by the Repserver maintenance user (in the HANA database).
While the column default (for inserts; as discussed with Shivam) will work, for updates you've got a few options:
an update trigger on the HANA table [I don't work with HANA so I don't know if this is doable]
defining the update column as a (materialized) computed column, with the associated function being responsible for obtaining the current date/time when other columns in the table are modified [while this is doable in Sybase ASE, I don't know if this is doable in HANA]
(in repserver) create a custom function string for the rs_update function on this table which emulates a standard rs_update function string with the addition of an update of LAST_MODIFIED_DATE = getdate() (replace getdate() with HANA's equivalent of the current date/time) [there are a couple different ways to do this depending on SRS version, what's doable with HANA-specific function strings, and personal preference - a bit much to go into at this point if a custom function string is going to be out of the question or you've already got an acceptable solution]

Can LogParser output to Azure SQL?

Azure SQL tables require a clustered index and will not accept insertions if one is not present. If one is present, LogParser complains about a mismatch between the number of columns in the select list and the target table.
Is there any way to square this circle? Perhaps embed an expression in LogParser's select list, like "SELECT DateTime,Thread,Level,Logger,Message,Exception,(select max(id)+1 from loggerTbl)..."
It's becoming amazingly difficult to parse plain old Azure logs into SQL, where God intends them to be.
Azure SQL tables require a clustered index and will not accept insertions if one is not present.
That is no longer true. Azure SQL DB v12 no longer has this restriction. Move your DB to one of the new tiers (Basic, Standard) and your DB will be upgraded to v12.
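As a quick check (hypothetical table name, columns taken from the question), on a v12 database a plain heap accepts inserts without any clustered index:
create table dbo.LoggerTbl (
    [DateTime] datetime2     not null,
    Thread     int           null,
    [Level]    nvarchar(20)  null,
    Logger     nvarchar(256) null,
    [Message]  nvarchar(max) null,
    Exception  nvarchar(max) null
);  -- no clustered index or primary key
insert into dbo.LoggerTbl ([DateTime], Thread, [Level], Logger, [Message], Exception)
values (sysutcdatetime(), 1, N'INFO', N'TestLogger', N'heap insert works on v12', null);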

Communicating with Informix from SQL Server

Right... I've got a program I'm doing some maintenance on.
Urgh. Even describing it makes me shudder... Right, okay.
Every night, a database running on what we think is SQL Server 2000 hooks up to an Informix database and copies it over into SQL Server.
The Informix/SQL data is accessed by the program I'm maintaining, which then stores some data in a different SQL Server 2000 database. This data should have foreign key constraints on the Informix data, but doesn't.
Further on down the line, data from the SQL database is put back into the Informix/SQL database, and later still, back into the actual Informix database.
Basically, the root of my problem is that there are no foreign or primary key constraints on the non-Informix SQL database. Well, some of the tables have a Primary key on a non-meaningful "ID" column, but those aren't FK'd to any other tables.
My question is: Is it possible to link SQL Server 2000 to the native Informix database in some way, so that I can add foreign key constraints within the SQL database so that SQL Server can only create rows when it can refer to existing rows within the Informix database?
I'll do my best to answer any questions anyone has, but as far as I can tell the reasoning behind these design decisions was genuine insanity, so reasons won't be particularly forthcoming, as I can't work them out, myself...
Yuck!
Bad Luck (on the mess you've inherited)!
Good Luck (with your work fixing the mess)!
Which version of Informix, and what platform (type of machine, o/s) is it running on?
Is there a reason (other than that it will break because the data is a mess) that you can't update the Informix schema to enforce the real RI constraints? You probably need to know how bad the mess is so that you can start the cleanup process. IDS (Informix Dynamic Server) does have 'violations tables' which can be used to track problematic rows of data; 'START VIOLATIONS' and 'STOP VIOLATIONS' are the statements to look for in the Informix Guide to SQL: Syntax manual. You might well need to unload and delete the data from one table before starting to load the data with the violations checking enabled.
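A rough sketch of that mechanism (Informix syntax; the orders/customer tables and customer_num column here are stand-ins):
-- capture problematic rows in orders_vio (diagnostics in orders_dia)
START VIOLATIONS TABLE FOR orders;
-- ...add or enable the FK constraint in filtering mode and (re)load the data;
--    rows that fail the constraint are written to the violations table
--    instead of aborting the whole operation...
STOP VIOLATIONS TABLE FOR orders;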
After clarification, the question seems to be "Can I set up referential integrity constraints on tables in the SQL Server databases that are constrained by (refer to) tables in the Informix databases?"
The answer to that is (sadly):
No
Most DBMS are reluctant to have cross-database referential integrity constraints, let alone cross-DBMS constraints.
The closest approximation would be to have copies of the relevant Informix tables in the SQL Server databases, but that probably adds to the data transfer workload. OTOH, cleaning up the data probably requires that anyway; it might be possible to relax the copying later, once the data is more nearly sane. It depends, in part, on the volatility of the referenced Informix data - how often rows are added to or deleted from the referenced tables.
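The shape of that workaround, as a sketch (hypothetical table/column names): keep a nightly-refreshed copy of the referenced Informix table in the SQL Server database and constrain the application table against the copy.
create table dbo.InformixCustomerCopy (
    customer_num int not null primary key   -- refreshed by the nightly copy job
);
alter table dbo.AppOrders
    add constraint FK_AppOrders_CustomerCopy
    foreign key (customer_num)
    references dbo.InformixCustomerCopy (customer_num);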