Define of Oracle LONG_RAW column fails in 11g with ORA-01062

I inherited the support of a small application that originally used Oracle 9. It is basically a system for storing and retrieving photographs in Oracle as LONG_RAW columns. The photos range from 20 KB to over 4 MB. Everything works fine in Oracle 9.
I upgraded the database to Oracle 11g and am having trouble reading some of the larger photos. The define statement for a LONG_RAW column holding a photo much larger than (I think) 100 KB fails with ORA-01062 - unable to allocate memory for define buffer.
I spent all day yesterday googling this, and the two most prevalent answers were:
do a piecewise fetch
increase the maximum buffer size
But alas, even more googling could not turn up how to do either of these.
I think I could fix this if I could find where the maximum Oracle buffer size is set. I looked at all the .ora files I could find, but no joy.
I am stuck big time. Any help would be most appreciated. Thanks in advance.
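Not part of the original question, but one frequently suggested long-term fix is to migrate the deprecated LONG RAW column to a BLOB, which is not subject to the client define-buffer limit. A minimal sketch, assuming a hypothetical PHOTOS table with a PHOTO column (the names are placeholders, not from the question):
-- convert the deprecated LONG RAW column to a BLOB in place;
-- the client code must then be changed to read the column as a LOB
ALTER TABLE photos MODIFY (photo BLOB);
Whether this is feasible depends on how much of the existing client code can be changed; otherwise a piecewise fetch at the OCI level is the usual alternative.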

Related

PostgreSQL: Set a full column to NULL value & database size increased. Why?

I'm working with PostgreSQL. I have a database named db_as with 25,000,000 rows of data. I wanted to free up some disk space, so I set an entire column to NULL, thinking that this would decrease the database size. It didn't happen; in fact, the opposite occurred: the database size increased, and I don't know why. It went from 700 MB to 1425 MB, which is a lot :( .
I used this statement to get each column's size:
SELECT sum(pg_column_size(_column)) as size FROM _table
And this one to get the size of every database:
SELECT pg_database.datname, pg_size_pretty(pg_database_size(pg_database.datname)) AS size FROM pg_database;
The original values will still be on disk, just marked as dead tuples; a plain UPDATE never shrinks the files by itself.
Run a VACUUM FULL on the database to rewrite the tables and return the space to the operating system:
VACUUM FULL;
Documentation
https://www.postgresql.org/docs/12/sql-vacuum.html
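A minimal sketch of checking the effect (db_as and _table are the names from the question; VERBOSE is optional):
-- reclaim the dead tuples left behind by the UPDATE ... SET column = NULL
VACUUM (FULL, VERBOSE) _table;
-- confirm that the database actually shrank
SELECT pg_size_pretty(pg_database_size('db_as'));
Note that VACUUM FULL takes an exclusive lock and needs temporary disk space to rewrite the table, so it is best run during a quiet period.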

How to get MS SQL Server and database space info in a straightforward, simple way

I'm developing an app for which I need some simple information about a given Microsoft SQL Server:
The maximum size the server supports¹
The total free space still available for a database to grow²
I googled this, and even though there are many pages on the matter with many different answers, I couldn't find anything that suits me, because it was either too fancy (e.g. writing complex tables to files) or didn't actually work. The context is a simple piece of software to manage the database; it will get this information, store it in integer variables and then use it to work. So my idea here is simply to do a query, SELECT style, grab one value and move on. An example of the only "function" I managed to find that would fit these criteria:
SELECT size FROM sys.master_files WHERE DB_NAME(database_id) = 'DB_NAME_HERE' ²
(Note: this is just an example, the function doesn't actually return what I want)
¹: For example, the Express editions may allow up to 10 GB of storage capacity (it depends on the version). So the function here should return a value close to that.
²: To be clear in advance: knowing my own database's size doesn't help, since it's a server with many databases. My app's logic needs the free space available for any database to grow, so calculating "total space - my db space" won't work.
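Not from the original question, but a sketch of how a query-only approach might look, assuming SQL Server 2008 R2 SP1 or later and permission to read the server DMVs (sys.dm_os_volume_stats and SERVERPROPERTY are standard; the edition string still has to be mapped to its size cap inside the app):
-- edition, from which the per-database size limit (e.g. 10 GB for Express) can be inferred
SELECT SERVERPROPERTY('Edition') AS edition;

-- free space on the volume(s) that hold the data files, i.e. room left for any database to grow
SELECT DISTINCT
    vs.volume_mount_point,
    vs.total_bytes / 1048576     AS total_mb,
    vs.available_bytes / 1048576 AS free_mb
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;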

Limit to how many fields a '*' can hold in Teradata?

I have wasted about two hours messing around with character field lengths in my data to get around an error about 'Right truncation of string data'. It turns out it seems to be related to the table '*' operator.
It appears as if this operator can only hold a certain number of fields before throwing an error. Does anyone know if this is the case? I am working on a large series of tables with hundreds of columns, and manually stating them at each step in my job makes maintenance much more difficult. If this is a known issue, is there a way around it?
Current versions of Teradata are limited to 2048 columns per table.
To check database limits please refer to “SQL Reference: Fundamentals, Appendix C”.
If that is not your case, please provide more information, including your measurements.
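A quick way to check whether you are anywhere near that limit is to count the columns in the data dictionary; DBC.ColumnsV is the standard Teradata view, and the database/table names below are placeholders:
SELECT COUNT(*) AS column_count
FROM DBC.ColumnsV
WHERE DatabaseName = 'your_database'
  AND TableName    = 'your_table';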

out of memory sql execution

I have the following script:
SELECT
DEPT.F03 AS F03, DEPT.F238 AS F238, SDP.F04 AS F04, SDP.F1022 AS F1022,
CAT.F17 AS F17, CAT.F1023 AS F1023, CAT.F1946 AS F1946
FROM
DEPT_TAB DEPT
LEFT OUTER JOIN
SDP_TAB SDP ON SDP.F03 = DEPT.F03,
CAT_TAB CAT
ORDER BY
DEPT.F03
The tables are huge. When I execute the script in SQL Server directly, it takes around 4 minutes, but when I run it in the third-party program (SMS LOC, based on Delphi) it gives me the error
<msg> out of memory</msg> <sql> the code </sql>
Is there any way I can lighten the script so it can be executed? Or has anyone had the same problem and solved it somehow?
I remember having had to resort to the ROBUST PLAN query hint once on a query where the query-optimizer kind of lost track and tried to work it out in a way that the hardware couldn't handle.
=> http://technet.microsoft.com/en-us/library/ms181714.aspx
But I'm not sure I understand why it would work for one 'technology' and not another.
Then again, the error message might not be from SQL but rather from the 3rd-party program that gathers the output and does so in a 'less than ideal' way.
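If you want to experiment with the hint on the posted query, it is appended as the final clause of the statement (a sketch only; whether it helps in this particular case is not certain):
SELECT DEPT.F03 AS F03, DEPT.F238 AS F238, SDP.F04 AS F04, SDP.F1022 AS F1022,
       CAT.F17 AS F17, CAT.F1023 AS F1023, CAT.F1946 AS F1946
FROM DEPT_TAB DEPT
LEFT OUTER JOIN SDP_TAB SDP ON SDP.F03 = DEPT.F03,
     CAT_TAB CAT
ORDER BY DEPT.F03
OPTION (ROBUST PLAN);  -- asks for a plan that works for the maximum potential row size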
Consider adding paging to the user edit screen and the underlying data call. The point is that you don't need to see all the rows at one time, but they remain available to the user upon request.
This will alleviate much of your performance problem.
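A minimal sketch of server-side paging with OFFSET/FETCH (assumes SQL Server 2012 or later; @PageNumber and @PageSize are hypothetical parameters that the calling program would supply):
DECLARE @PageNumber INT = 1;   -- 1-based page index requested by the user
DECLARE @PageSize   INT = 500; -- rows per page

SELECT DEPT.F03 AS F03, DEPT.F238 AS F238, SDP.F04 AS F04, SDP.F1022 AS F1022,
       CAT.F17 AS F17, CAT.F1023 AS F1023, CAT.F1946 AS F1946
FROM DEPT_TAB DEPT
LEFT OUTER JOIN SDP_TAB SDP ON SDP.F03 = DEPT.F03,
     CAT_TAB CAT
ORDER BY DEPT.F03
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;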
I had a project where I had to add over 7 million individual lines of T-SQL code via batch (I couldn't figure out how to programmatically leverage the new SEQUENCE command). The problem was that there was a limited amount of memory available on my VM (I was allocated the maximum amount of memory for this VM). Because of the large number of lines of T-SQL code, I first had to test how many lines it could take before the server crashed. For whatever reason, SQL Server 2012 doesn't release the memory it uses for large batch jobs such as mine (we're talking around 12 GB of memory), so I had to reboot the server every million or so lines. This is what you may have to do if resources are limited for your project.

Best approach for bringing 180K records into an app: core data: yes? csv vs xml?

I've built an app with a tiny amount of test data (clues & answers) that works fine. Now I need to think about bringing in a full set of clues & answers, which is roughly 180K records (it's a word game). I am worried about speed and memory usage, of course. Looking around the intertubes and my library, I have concluded that this is probably a job for Core Data. Within that approach, however, I guess I can bring the data in as CSV or as XML (I can create either one from the raw data using a scripting language). I found some resources about how to handle each case. What I don't know is anything about the overall speed and other issues one might expect when using CSV vs XML. The CSV file is about 3.6 MB and the data type is strings.
I know this is dangerously close to a non-question, but I need some advice, as either approach requires a large coding commitment. So here are the questions:
For a file of this size and characteristics, would one expect CSV or XML to be a better approach? Is there some other format/protocol/strategy that would make more sense?
Am I right to focus on Core Data?
Maybe I should throw some fake code here so the system doesn't keep warning me about asking a subjective question. But I have to try! Thanks for any guidance. Links to discussions appreciated.
As for file size, CSV will always be smaller than an XML file, as it contains only the raw data in ASCII format. Consider the following 3 rows and 3 columns.
Column1, Column2, Column3
1, 2, 3
4, 5, 6
7, 8, 9
Compare that to its XML counterpart below, which doesn't even include schema information. It is also in ASCII format, but the rowX and ColumnX tags have to be repeated multiple times throughout the file. Compression could of course help, but I'm guessing that even with compression the CSV will still be smaller.
<root>
    <row1>
        <Column1>1</Column1>
        <Column2>2</Column2>
        <Column3>3</Column3>
    </row1>
    <row2>
        <Column1>4</Column1>
        <Column2>5</Column2>
        <Column3>6</Column3>
    </row2>
    <row3>
        <Column1>7</Column1>
        <Column2>8</Column2>
        <Column3>9</Column3>
    </row3>
</root>
As for your other questions, sorry, I can't help there.
This is large enough that the I/O time difference will be noticeable, and since the CSV is (what, 10x smaller?) the processing time difference (whichever is faster) will be negligible compared to the difference in reading it in. And CSV should be faster outside of I/O too.
Whether to use Core Data depends on what features of Core Data you hope to exploit. I'm guessing the only one is querying, and it might be worth it for that, although if it's just a simple mapping from clue to answer, you might just want to read the whole thing from the CSV file into an NSMutableDictionary. Access will be faster.