Will the dbspace change when the value in the row changes under FRAGMENT BY EXPRESSION? (Informix DB) - documentation

Unfortunately, I have not found an explanation for this situation in the documentation.
In particular, I created a table and fragmented it "FRAGMENT BY EXPRESSION" by the "value" field. In the fragmentation condition, I wrote:
value < 100 IN dbspace_1,
value >= 100 IN dbspace_2.
For example, "value" in the row is 85, so the row is located in dbspace_1. If I update the value to 110, will this row move to dbspace_2?

Yes. If it didn't, it would invalidate the fragmentation expression scheme. You can verify this yourself on a test instance by doing the update and then looking at the onlog output: you should see a delete from the partition in dbspace_1 followed by an insert into the partition in dbspace_2, all within a single transaction. (This is easiest to see on an otherwise idle system, where you can switch to a completely unused logical log so the transaction generated by the UPDATE statement is easy to spot.)
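If you want to try it, a minimal sketch of such a test might look like this (the table name, column types, and id value are made up; the fragmentation clause and dbspace names come from the question):

CREATE TABLE frag_test (
    id    INTEGER,
    value INTEGER
) FRAGMENT BY EXPRESSION
    value < 100 IN dbspace_1,
    value >= 100 IN dbspace_2;

INSERT INTO frag_test VALUES (1, 85);            -- stored in dbspace_1
UPDATE frag_test SET value = 110 WHERE id = 1;   -- logged as a delete from dbspace_1 plus an insert into dbspace_2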

Related

Oracle error code ORA-00913 - IN CLAUSE limitation with more than 65000 values (Used OR condition for every 1k values)

My application team is trying to fetch 85,000 values from a table using a SELECT query that is being built on the fly by their program.
SELECT * FROM TEST_TABLE
WHERE (
    ID IN (00001, 00002, ..., 01000)
    OR ID IN (01001, 01002, ..., 02000)
    ...
    OR ID IN (84001, 84002, ..., 85000)
);
But I am getting the error "ORA-00913: too many values".
If I reduce the IN clauses to only 65,000 values in total, I do not get this error. Is there any limit on the number of values for the IN clause (combined with OR conditions)?
The issue isn't about IN lists; it is about a limit on the number of OR-delimited compound conditions. I believe the limit applies not to OR specifically, but to any compound condition using any combination of OR, AND and NOT, with or without parentheses. And, importantly, this doesn't seem to be documented anywhere, nor acknowledged by anyone at Oracle.
As you clearly know already, there is a limit of 1,000 items in an IN list - and you have worked around that.
The parser expands an IN condition into a compound, OR-delimited condition. The limit that applies to you is the one I mentioned already.
The limit is 65,535 "atomic" conditions (put together with OR, AND, NOT). It is not difficult to write examples that confirm this.
The better question is why (and, of course, how to work around it).
My suspicion: To evaluate such compound conditions, the compiled code must use a stack, which is very likely implemented as an array. The array is indexed by unsigned 16-bit integers (why so small, only Oracle can tell). So the stack size can be no more than 2^16 = 65,536; and actually only one less, because Oracle thinks that array indexes start at 1, not at 0 - so they lose one index value (0).
Workaround: create a temporary table to store your 85,000 values. Note that the idea of using tuples (artificial as it is) allows you to overcome the 1000 values limit for a single in list, but it does not work around the limit of 65,535 "atomic" conditions in an or-delimited compound condition; this limit applies in the most general case, regardless of where the conditions come from originally (in lists or anything else).
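A rough sketch of that workaround (the table name wanted_ids and the bind variable are illustrative, not part of the original question; TEST_TABLE is from the question):

CREATE GLOBAL TEMPORARY TABLE wanted_ids (id NUMBER PRIMARY KEY)
    ON COMMIT PRESERVE ROWS;

-- load the 85,000 values, typically with a bulk/array insert from the application
INSERT INTO wanted_ids (id) VALUES (:id);

SELECT t.*
FROM   test_table t
JOIN   wanted_ids w ON w.id = t.id;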
More information on AskTom - you may want to start at the bottom (my comments, which are the last ones in the threads):
https://asktom.oracle.com/pls/apex/f?p=100:11:10737011707014::::P11_QUESTION_ID:9530196000346534356#9545388800346146842
https://asktom.oracle.com/pls/apex/f?p=100:11:10737011707014::::P11_QUESTION_ID:778625947169#9545394700346458835

How to skip records when appending to a table if a column is NULL but REQUIRED?

I would like to skip rows that have NULL values in a REQUIRED column when writing query results to a table.
Right now the whole query fails.
I would like behaviour similar to the flag configuration.load.maxBadRecords, which is only for JSON/CSV.
It skips bad records and, according to the following answer, stores the bad records in the status.errors field.
I haven't tested the above flag, but I see that it is connected with configuration.load.ignoreUnknownValues and is probably useful when records cannot be parsed (e.g. due to invalid characters).
Anyway, in my case the error is caused by NULL values in REQUIRED columns.
I would also be grateful for some reasonable workarounds.
For me, setting default values with IFNULL is not an option.
I also want to avoid detecting such rows programmatically.
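For what it's worth, a hypothetical illustration of the failing case (dataset, table, and column names are made up; the destination column code is declared REQUIRED), shown here as a DML INSERT although the same failure occurs when appending query results to a destination table:

-- dest.code is REQUIRED (NOT NULL), so the whole statement fails
-- as soon as any source row carries a NULL code
INSERT INTO mydataset.dest (id, code)
SELECT id, code
FROM mydataset.source;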

String or binary data would be truncated -- Heisenberg problem

When you get this error, the first thing you ask is, which column? Unfortunately, SQL Server is no help here. So you start doing trial and error. Well, right now I have a statement like:
INSERT tbl (A, B, C, D, E, F, G)
SELECT A, B * 2, C, D, E, q.F, G
FROM tbl
,othertable q
WHERE etc etc
Note that
Some values are modified or linked in from another table, but most values are coming from the original table, so they can't really cause truncation going back to the same field (that I know of).
Eliminating fields one at a time eventually makes the error go away, if I do it cumulatively, but — and here's the kicker — it doesn't matter which fields I eliminate. It's as if SQL Server is objecting to the total length of the row, which I doubt, since there are only about 40 fields in all, and nothing large.
Anyone ever seen this before?
Thanks.
UPDATE: I have also done "horizontal" testing, by filtering out the SELECT, with much the same result. In other words, if I say
WHERE id BETWEEN 1 AND 100: Error
WHERE id BETWEEN 1 AND 50: No error
WHERE id BETWEEN 50 AND 100: No error
I tried many combinations, and it cannot be limited to a single row.
Although the table had no keys, constraints, indexes, or triggers, it did have statistics, and therein lay the problem. I killed all the table's stats using this script
http://sqlqueryarchive.blogspot.com/2007/04/drop-all-statistics-2005.html
And voila, the INSERT was back to running fine. Why are the statistics causing this error? I don't know, but that's another problem...
UPDATE: This error came back even with the stats deleted. Because I was convinced that the message itself was inaccurate (there is no evidence of truncation), I went with this solution instead:
SET ANSI_WARNINGS OFF
INSERT ...
SET ANSI_WARNINGS ON
Okay, it's more of a hack than a solution, but it allows me — and hopefully someone else — to move on to other things.
Is there a reason you can't simply cast the fields as the structural equivalent of their destination column like so:
Select Cast(A as varchar(42))
, Cast(B * 2 as Decimal(18,4))
, Cast(C As varchar(10))
...
From Table
The downside to this approach is that it will truncate the text values at their character limit. However, if you are "sure" that this shouldn't happen, then no harm will come.
In some cases the problem is caused by another column that has a default value.
For example, you might have added a column to track the user who created the row, such as USER_ENTERED with a default of suser_sname(), but the column length is shorter than the current user name.
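A hypothetical sketch of that situation (the column name and length are made up; tbl is the table from the statement above):

ALTER TABLE tbl ADD USER_ENTERED varchar(10) DEFAULT SUSER_SNAME();
-- any INSERT that does not supply USER_ENTERED now fails with the truncation
-- error whenever SUSER_SNAME() returns more than 10 characters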
There is a maximum row size limit (8,060 bytes) in SQL Server 2005. See here.
Most of the time you'll run into this with lots of nvarchar columns.
Yes, when I ran into this, I had to create another table (or tables) that mimicked the current structure. I then left the code unchanged, widened every column's data type to nvarchar(MAX) until the error stopped, and then narrowed them back down one by one. Yes, long and drawn out, but I had major issues trying anything else. Once I had tried a bunch of stuff that caused too much of a headache, I just decided to take the "Cave Man" approach, as we laughed about it later.
Also, I have seen a similar issue with FKs, where you must ask:
What are the foreign key constraints? Are there any?
Since there are not any, try this guy's DataMgr component:
http://www.bryantwebconsulting.com/blog/index.cfm/2005/11/21/truncated
Also check this out:
http://forums.databasejournal.com/showthread.php?t=41969
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=138456
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=97349
You could also dump the table you are selecting from into a temp table and see whether the insert into the temp table raises the error, and on which row.
The error should report the line number; if you put each column on its own line, it should tell you exactly where it is bombing.
If it looks like "total length", then do you have an audit trigger concatenating the columns for logging?
Edit: after your update, I really would consider the possibility that a trigger is causing this...
Edit 2, after seeing your statistics answer...
Because the total stats attribute length was probably greater than 900 bytes?
Not sure if this applies to statistics though, and I'm not convinced.
Do you have a reference, please? I'd like to know why stats would truncate when they are simply binary histograms (IIRC).

Records visible but not accessible in MySQL - why?

This is a weird issue. I'm accessing my online database using PremiumSoft's Navicat for MySQL. Some of the records are behaving very strangely - let me give an example. I have the following table columns: id, name, address, abbreviation, contact. Now when I run an SQL query for, let's say, any entry that has the abbreviation 'ab', it returns zero rows; however, such an entry already exists in the database.
What's even weirder is that when I view the table in Navicat, the abbreviation field looks empty for the tuple that has the required value, but when I hover over it or highlight it, I can see the value. It's there but it's inaccessible, and likewise this is a problem with many other tuples in the table.
What could the problem be here? I even tried to delete and recreate the table by executing a dump file, but no good came of that. Help please :(
Check that there aren't any invisible characters at the beginning of the string (like a carriage return or something).
As you can see from the following example, there can be a junk extra character like A0 that should be removed using an UPDATE.
mysql> select add_code, unhex(replace(hex(add_code), 'A0', '')) from old_new limit 1\G
*************************** 1. row ***************************
add_code: 000242�
unhex(replace(hex(add_code), 'A0', '')): 000242
1 row in set (1.32 sec)
http://dev.mysql.com/doc/refman/5.0/en/string-functions.html#function_unhex
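The cleanup UPDATE implied above might look something like this (a sketch using the same table and column as in the example, applying the same hex/replace/unhex trick):

UPDATE old_new
SET    add_code = unhex(replace(hex(add_code), 'A0', ''));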

Will using and updating the same field in one UPDATE statement cause undefined behaviour?

Here is an example
UPDATE <table> SET duration = datediff(ss, statustime, getdate()), statustime = getdate() WHERE id = 2009
Is the field duration going to be assigned an undefined value, since statustime is both used and assigned in the same statement? (i.e. a positive value if datediff is processed first, or a negative value if statustime is processed first)
I can definitely update it in two separate statements, but I am curious whether it is possible to do it in one statement.
No. Both values are calculated before either assignment is made.
Update:
I tracked down the ANSI-92 spec, and section 13.10 on the UPDATE statement says this:
The <value expression>s are effectively evaluated for each row
of T before updating any row of T.
The only other applicable rules refer to section 9.2, but that only deals with one assignment in isolation.
There is some room for ambiguity here: it could calculate and update all statustime rows first and all duration rows afterward and still technically follow the spec, but that would be a very ... odd ... way to implement it.
My gut instinct says 'no', but this will vary depending on the SQL implementation, query parser, and so on. Your best bet in situations like these is to run a quick experiment on your server (wrap it in a transaction to keep it from modifying data), and see how your particular implementation behaves.
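A minimal sketch of such an experiment, using SQL Server syntax and a throwaway temp table (all names and values are made up):

CREATE TABLE #t (id int, statustime datetime, duration int);
INSERT #t VALUES (2009, dateadd(ss, -30, getdate()), NULL);

BEGIN TRAN;
UPDATE #t
SET    duration = datediff(ss, statustime, getdate()),  -- references the pre-update statustime
       statustime = getdate()
WHERE  id = 2009;
SELECT duration FROM #t WHERE id = 2009;   -- expected to be about 30 (the old statustime is used), not 0
ROLLBACK;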