DB2's regular expression function REGEXP_INSTR works perfectly with host variables (PL/I program), but somehow it has issues when the input string has more than 360 characters; trailing spaces would not be an issue.
3 Info CHAR(378),
EXEC SQL
SELECT REGEXP_INSTR(:Info,
:RG_EXPR,
1,
1)
INTO :REGEXP_START
FROM SYSIBM.DUAL;
Error Message:
SQL0302N The value of a host variable in the EXECUTE or OPEN statement is out of range for its corresponding use. SQLSTATE=22001.
Edit: The issue seems to be resolved when I use a VARCHAR host variable instead. But the issue occurs with non-VARCHAR fields and large inputs.
This has less to do with regexp_instr() than with Db2 database fundamentals, specifically the maximum length of fixed-length character strings.
A fixed-length character string (data type CHAR in SQL) in Db2 can occupy between 1 and 255 bytes.
A variable-length character string (data type VARCHAR and others) can occupy between 1 and 32,672 bytes.
If you need longer strings, then you need to use large objects (for example CLOB, which allows up to 2 gigabytes).
Please refer to the documentation for your Db2 version on your Db2 platform (z/OS, IBM i, Linux/Unix/Windows).
Your host variables need to reflect these rules, and they must match (or be fully compatible with) the database columns or result-set columns to/from which they are assigned.
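A minimal sketch of the fix noted in the question's edit, assuming the same PL/I structure as above: adding the VARYING attribute makes the precompiler pass the member to Db2 as a VARCHAR host variable instead of a fixed-length CHAR, which avoids the 255-byte CHAR limit.

3 Info CHAR(378) VARYING, /* CHAR(n) VARYING maps to a VARCHAR(n) host variable */

The embedded SELECT from the question can then remain exactly as written.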
Related
I need help resolving characters of unknown type from a database field into a readable format, because I need to overwrite this value at the database level with another valid value (in the exact format the application stores it in) to automate system copy activities.
I have a proprietary application that also allows users to configure it via the frontend. This configuration data gets stored in a table, and the values of a configuration property are stored in a column of type "BLOB". For the desired value here, I provide a valid URL in the application frontend (like http://myserver:8080). However, what gets stored in the database is not readable (some square characters). I tried all sorts of HANA conversion functions (HEX, binary), both simple and cascaded (e.g. first to binary, then to varchar), to make it readable. I also tried it the other way around, making the value that I want to insert appear in the correct format (conversion to BLOB via hex or binary), but this does not work either. I copied the value to the clipboard and compared it to all sorts of character set tables (although I am not sure if this can work at all).
My conversion attempts look somewhat like this:
SELECT TO_ALPHANUM('') FROM DUMMY;
where the quotes would contain the characters in question. I can't even print them here.
How can one approach this and maybe find out the character set that is used by this application? I would be grateful for some more ideas.
What you have in your BLOB column is a series of bytes. As you mentioned, these bytes have been written by an application that uses an unknown character set.
In order to interpret those bytes correctly, you need to know the character set as this is literally the mapping of bytes to characters or character identifiers (e.g. code points in UTF).
Now, HANA doesn't come with a whole lot of options to work on LOB data in the first place and for C(haracter)LOB data most manipulations implicitly perform a conversion to a string data type.
So, what I would recommend is to write a custom application that is able to read out the BLOB bytes and perform the conversion in that custom app. Once successfully converted into a string, you can store the data in a new NCLOB field that keeps it in UTF-8 encoding.
You will have to know the character set in the first place, though. No way around that.
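As a first diagnostic step, a hedged sketch (hypothetical table and column names; whether BINTOHEX accepts a BLOB argument directly may depend on the HANA version, so treat that as an assumption to verify):

SELECT BINTOHEX(config_value)    -- dump the raw bytes as a hex string
FROM config_table                -- hypothetical names
WHERE property_name = 'server.url';

If the hex output shows a 00 byte after every ASCII byte (e.g. 6800740074007000 for "http"), the application is most likely storing UTF-16, which would also explain the square characters in text viewers.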
I assume you are on Oracle. You can convert BLOB to CLOB as described here.
http://www.dba-oracle.com/t_convert_blob_to_clob_script.htm
In the case of your example, try this query:
select UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(<your_blob_value>)) from dual;
Obviously this only works for values below 32767 bytes.
I'm working with a SQL Server database in order to store a very long Unicode string. The field is of type 'ntext', which theoretically should be limited to 2^30 Unicode characters.
From MSDN documentation:
ntext
Variable-length Unicode data with a maximum string length of 2^30 - 1 (1,073,741,823) bytes. Storage size, in bytes, is two times the string length that is entered. The ISO synonym for ntext is national text.
I made this test:
Generate a 50,000-character string.
Run an UPDATE SQL statement:
UPDATE [table]
SET Response='... 50,000 character string...'
WHERE ID='593BCBC0-EC1E-4850-93B0-3A9A9EB83123'
Check the result - what is actually stored in the field at the end.
The result was that the field [Response] contains only 43,679 characters. All the characters at the end of the string were thrown out.
Why does this happen? How can I fix this?
If this is really the capacity limit of this data type (ntext), which other data type can store a longer Unicode string?
Based on what I've seen, you may just only be able to copy 43,679 characters. It is storing all the characters; they're in the db (check this with Select Len(Response) From [table] Where ... to verify), and SSMS has a problem copying more than that when you go to look at the full data.
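A sketch of that verification, using the table and key from the question; DATALENGTH is safe to call on an ntext column (LEN may reject the ntext type, so the byte count divided by two gives the character count):

SELECT DATALENGTH(Response) AS bytes_stored,      -- ntext stores 2 bytes per character
       DATALENGTH(Response) / 2 AS chars_stored
FROM [table]
WHERE ID = '593BCBC0-EC1E-4850-93B0-3A9A9EB83123';

If chars_stored comes back as 50,000, the data is intact and only the display/copy path is truncating it.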
NTEXT datatype is deprecated and you should use NVARCHAR(MAX).
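If the schema can be changed, a minimal sketch of that migration (an in-place column conversion; take a backup first):

ALTER TABLE [table]
ALTER COLUMN Response NVARCHAR(MAX);   -- up to 2 GB of Unicode data, not deprecated

NVARCHAR(MAX) also works with the full string function surface (LEN, REPLACE, and so on) that ntext does not support.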
I see two possible explanations:
The ODBC driver you use to connect to the database truncates the parameter value when it is too long (try using SSMS).
You write that you generate your input string; I suspect you generate CHAR(0), which is the null character.
If the second case is yours, make sure you cannot generate the \0 char.
EDIT:
I don't know how you check the length, but keep in mind that LEN does not count trailing whitespace:
SELECT LEN('aa     ') AS length -- 2
      ,DATALENGTH('aa     ') AS datalength -- 7
The last possible explanation I see is that you do something like:
SELECT 'aa     aaaa'
-- the result in SSMS displays as `aa aaaa`: so when you count the characters you lose all the multiple whitespaces
Check whether the query below returns 100k:
SELECT DATALENGTH(ntext_column)
That counts all the bytes. To get the full value out, right-click on the grid result and click 'Save Results to File'.
Can confirm. The actual limit is 43,679. We had a problem with a subscription service for a week. All the data looked good, but it still gave us an error that one of the fields had invalid values, even though it got correct values in. It turned out that the parameters were stored in NText, which maxed out at 43,679 characters. And because we cannot change the database design, we had to make two different subscriptions for the same thing and put half of the entities in the other one.
Hi, I am using PostgreSQL 9.2 and I want to use varchar(n) to store some long strings, but I don't know the maximum length that varchar(n) supports, and which type is better to use. Could you please suggest one? Thanks.
tl;dr: 1 GB (each character (really: codepoint) may be represented by 1 or more bytes, depending on where it sits on a Unicode plane - assuming a UTF-8 encoded database). You should always use the text datatype for arbitrary-length character data in PostgreSQL now.
Explanation:
varchar(n) and text use the same backend storage type (varlena): a variable length byte array with a 32bit length counter. For indexing behavior text may even have some performance benefits. It is considered a best practice in Postgres to use text type for new development; varchar(n) remains for SQL standard support reasons. NB: varchar() (with empty brackets) is a Postgres-specific alias for text.
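A minimal sketch of that best practice (hypothetical table name):

CREATE TABLE notes (
    id   serial PRIMARY KEY,
    body text NOT NULL      -- arbitrary length, same backend storage as varchar
);

If a business rule genuinely caps the length, a CHECK constraint such as CHECK (length(body) <= 500) is easier to change later than a varchar(n) declaration.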
See also:
http://www.postgresql.org/about/
According to the official documentation ( http://www.postgresql.org/docs/9.2/static/datatype-character.html ):
In any case, the longest possible character string that can be stored is about 1 GB. (The maximum value that will be allowed for n in the data type declaration is less than that. It wouldn't be useful to change this because with multibyte character encodings the number of characters and bytes can be quite different. If you desire to store long strings with no specific upper limit, use text or character varying without a length specifier, rather than making up an arbitrary length limit.)
Searching online reveals that the maximum value allowed varies depending on the installation and compilation options; some users report a maximum of 10485760 characters (10 MiB exactly, assuming a 1-byte-per-character fixed encoding).
By "the installation and compilation options" I mean that you can always build PostgreSQL from source yourself and before you compile PostgreSQL to make your own database server you can configure how it stores text to change the maximum amount you can store - but if you do this then it means you might run into trouble if you try to use your database files with a "normal", non-customized build of PostgreSQL.
Is there is a way to find out the number of bytes used by a particular field value (which may or may not be longer than 4000 characters) in an Oracle SQL query?
dbms_lob.getLength() returns the number of characters, not bytes, and I can't just do a straight multiplication since there is a variable number of bytes per character in this character set. I briefly wondered about using dbms_lob.converttoblob(), but this appears to need PL/SQL and I need to do this directly in a single query.
Use Oracle function LENGTHB() to get this result.
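For example (hypothetical table and column names; note that Oracle documents LENGTHB as supported only for single-byte CLOBs, so with a multibyte character set this is reliable for VARCHAR2 values):

SELECT LENGTHB(some_column) AS bytes_used
FROM some_table;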
There is a way around it: convert the CLOB to a BLOB using DBMS_LOB.CONVERTTOBLOB and then use DBMS_LOB.GETLENGTH() on the result. This will return the number of bytes.
As I haven't received a satisfactory answer yet, I'm currently resorting to using dbms_lob.getlength() to get the number of characters and then multiplying by 2. This is based on a comment here about the AL32UTF8 character set:
https://forums.oracle.com/forums/thread.jspa?threadID=2133623
Almost all characters require 2 bytes of storage, with a handful of special characters requiring 4 bytes of storage.
I haven't verified how true this is, but the person sounded like they knew what they were talking about, so I am currently using it as a "best guess".
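A sketch of that best guess as a single query (hypothetical names; the factor of 2 reflects the UCS-2-style internal storage of CLOBs in multibyte-charset databases such as AL32UTF8, so it is an estimate rather than an exact count):

SELECT 2 * DBMS_LOB.GETLENGTH(clob_column) AS approx_bytes
FROM some_table;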
What is the maximum value of the data type INTEGER in sqlite3?
How do you store an IP address in the database?
What is ATTACH?
How do you create a table which belongs to a specific database using SQL DDL?
What is this error about?
error while the list of system catalogue: no such table: temp.sqlite_master
Unable to execute statement
Does the sqlite3 text data type support Unicode?
Thanks.
Look at http://www.sqlite.org/datatype3.html. The minimum is -(2^63) == -9223372036854775808 and the maximum is 2^63 - 1 == 9223372036854775807.
I would think you should use a varchar
http://www.sqlite.org/lang_attach.html
http://www.sqlite.org/lang_createtable.html
SQLite 'no such table' error might be of help.
In general, check out the SQLite documentation.
INTEGER. The value is a signed integer, stored in 1, 2, 3, 4, 6, or 8 bytes depending on the magnitude of the value.
The INTEGER storage class, for example, includes 6 different integer datatypes of different lengths. This makes a difference on disk. But as soon as INTEGER values are read off of disk and into memory for processing, they are converted to the most general datatype (8-byte signed integer).
from http://www.sqlite.org/datatype3.html
Unless you have some other reason not to, you can store the IP address using TEXT.
Regarding the second question:
You can store an IP address in the DB in 2 ways:
As a string. This is recommended, as it will support both IPv4 and IPv6 and does not require any additional hassle with IP address conversions.
As an integer. An IP is basically 4 bytes that can all be merged into one integer value. However, do you really want that? It will give you loads of pain converting it to/from a string any time that is required. (A sketch of both options follows below.)
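A rough sketch of the two options side by side (hypothetical table and column names):

CREATE TABLE hosts (
    ip_text TEXT,    -- '192.168.0.1'; also fits IPv6 such as '::1'
    ip_int  INTEGER  -- 192*16777216 + 168*65536 + 0*256 + 1 = 3232235521
);

The integer form allows range queries (for example subnet checks with BETWEEN), which is the usual reason to accept the conversion pain.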
How do you store an IP address in the database?
The easiest way is to store the string form (e.g., “127.0.0.1” or “::1”), since you can then read them manually, and reparsing to an address structure (if you have to) is easy. SQLite likes strings (which use the TEXT type) and handles them efficiently.
Does the sqlite3 text data type support Unicode?
Yes and no.
Yes in that SQLite lets you store TEXT data in UTF-8 or UTF-16. (Use PRAGMA ENCODING to choose the internal format.)
No in that the built-in LOWER and UPPER functions only affect ASCII characters. But you can redefine functions and collations to add this support. There's an ICU extension to SQLite that does this.
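For instance, a sketch of choosing the internal text encoding; the pragma only takes effect on a brand-new, empty database, before any tables are created:

PRAGMA encoding = 'UTF-8';   -- default; alternatives include 'UTF-16le' and 'UTF-16be'
CREATE TABLE t (s TEXT);     -- TEXT values are then stored in the chosen encoding

Case mapping beyond ASCII still requires the ICU extension mentioned above.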