SQL splits the data into two lines after a dot - sql

I have a table which I update using a stored procedure. One column is for the image Url. The code in the stored procedure looks like:
UPDATE Products
SET ImageUrl = 'https://images.XXXXXXX.com/lm/image/s/'+RIGHT(Source,2)+'/'+Source+'_'+Code+'.203'
I need the URL to be on a single line in the cell; however, it splits the URL right before .203 when writing it into the cell. So, in the cell, it looks like:
https://images.XXXX.com/lm/image/s/ab/g671235_12312
.203
Normally this would be no problem for me, but I use this data in XML, and since the URL is not on one line, the remote server I connect to does not update the image when I submit the XML. When I manually fix the URL and put it on one line, it works fine. I googled and searched for a way to fix this issue, but I could not find a solution. Any help will be appreciated.
Thanks

It sounds like there is a carriage return/line feed captured within your "Code" column. I have encountered this many times when users manually fiddle with values (they update the value and then hit Enter, thinking this will apply the value, when it actually inserts the \r\n characters into the column).
To double check if this is the case:
Locate the value that is causing this line break to appear
Copy that cell value (I assume you are using SSMS)
Open Notepad++, Sublime, or similar editor that will display non-printable characters
Set your editor to display non-printable characters (in Notepad++ it is View > Show Symbols > Show All Characters)
This should then display the line break characters that are causing your headaches
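If you would rather check from the database side, a quick T-SQL probe (assuming SQL Server, since you are in SSMS, and the Products table from the question) will surface the affected rows:
-- Find rows whose Code column contains a carriage return or line feed
SELECT *
FROM Products
WHERE Code LIKE '%' + CHAR(13) + '%'
   OR Code LIKE '%' + CHAR(10) + '%';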
Quick solution: Scrub \r\n values - https://stackoverflow.com/a/951705/8026186
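In this case the scrub amounts to wrapping the suspect column in nested REPLACE calls before building the URL; a sketch against the UPDATE from the question:
UPDATE Products
SET ImageUrl = 'https://images.XXXXXXX.com/lm/image/s/'
    + RIGHT(Source, 2) + '/' + Source + '_'
    + REPLACE(REPLACE(Code, CHAR(13), ''), CHAR(10), '')  -- strip CR and LF
    + '.203'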
More ideal solution: prevention of \r\n insertion
If you have access to the data being entered into the database, the best way to avoid this is to prevent the \r\n from making it into the cell in the first place. The quick solution will work in case you don't have the leverage to control the initial input. However, from past experience, you will want to keep non-printable values from appearing in the first place.
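If you do control the schema, one way to keep them out at the door is a CHECK constraint; a sketch (the constraint name is made up, and existing rows must already be clean for it to apply):
ALTER TABLE Products ADD CONSTRAINT CK_Products_Code_NoCRLF
    CHECK (Code NOT LIKE '%' + CHAR(13) + '%'
       AND Code NOT LIKE '%' + CHAR(10) + '%');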
Hope this helps!

Related

Is there a way to reduce gap between two column headings in DB2

I am working on IBM i Series VR7, running SQL (DB2) using CLLE.
I have a SQL procedure in a TXT file, containing the command below to create a table in QTEMP.
create table qtemp.FILE1 as (
    select Field1, Field2, Field3, ..... Field10
    from FILE2
) with data;
I am calling the above procedure from CLLE using the command below.
RUNSQLSTM SRCFILE(MyLib/MySrc) SRCMBR(Proc_txt) COMMIT(*NONE)
Then I run the command below to generate the spool file.
RUNQRY QRYFILE((FILE1)) OUTTYPE(*PRINTER) OUTFORM(*DETAIL) FORMSIZE(60 132)
FORMTYPE(*STD) COPIES(1) LINESPACE(1)
The issue I am facing is that I am getting 2 white spaces between columns when creating the table with the create table command. When that table is converted into a spool file using the above RUNQRY command, the fields on the right side get truncated, as my report width is 132 by default and I cannot change it.
If the white spaces in the table created can be reduced to 1, my issue will be resolved.
I am using the IBM i Series' default SQL with DB2 as the database; I don't have much idea about their versions.
Edit2: Another issue I had was the report having a field on a second line. As per the requirement, a field had to be in the second row under another field; for example, I needed field10 under field5. I have fixed that too; read my answer below.
I hope it helps people in need, though I have my doubts.
Edit1: I have updated the question as requested. Any help would be much appreciated. Thanks.
The short answer is that yes, you can define the report to have 1 space between columns, but you have to define the Query400 object to do that. Unfortunately, this is not a good place to write a tutorial for Query400, but I can get you started.
Type WRKQRY and press Enter.
Then put the cursor on the query name field and press F4. You are now in the tool. You need to create a new query and define everything about it in this tool. Play around with it and see if that helps you.
I was able to get what I needed. As others have suggested, I finally used WRKQRY to control the column spacing: I reduced it to 1 and was able to fit the needed columns into the 132 width.
Another issue I had was the report having a field on a second line. As per the requirement, a field had to be in the second row under another field; for example, I needed field10 under field5. So I used the line-wrapping feature available in WRKQRY.
How I did it:
Create a WRKQRY object and select the file needed.
Sequence the field you need on the second line to the bottom.
Go to Select Output Type and Output Form and enter Y in the Line Wrapping field. Set the wrapping width equal to your report width. Leave the other fields as required.
This way each record will have the 10th field on the next row, if it has data. You can add as many fields as you like. You may have to add some white space to the field for proper alignment; I would suggest creating a new field with the concat (||) operator available in WRKQRY, as sketched below.
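For example, the padding could be baked in when the QTEMP table is built rather than inside WRKQRY; a sketch reusing the create table from the question (Field10 is assumed to be a character field, and the two trailing blanks are arbitrary):
create table qtemp.FILE1 as (
    select Field1, Field2,               -- ...remaining fields as before
           Field10 concat '  ' as Field10pad
    from FILE2
) with data;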
Thanks everyone for helping.

MS Access Error updating memo field with long text

Searching this problem returns quite a few hits, but many off-track answers, so I'm posting a concise description here, with an answer below.
The problem afflicts Microsoft Access 2010, and some versions before. Access 2013 renames Memo type to Long Text. I don't know if it has the same problem.
The root problem is associated with running an UPDATE query on a table with a memo field, in certain particular circumstances. This might be an UPDATE query composed in the visual query window, or some VBA running SQL via DAO or ADO or similar. Or it could arise while updating via a form.
(The current post is concerned with this occurrence just within an Access database, though elsewhere you will find discussion of similar-sounding issues when Access is connected to an external database server.)
Instead of generating an immediate and obvious error alert, Access (or perhaps Jet) places the value #Error (which is not just the string "#Error"!) into the Memo field. This might easily go unnoticed until some later time, resulting in visible errors such as:
-- You use Compact and Repair. That seems to complete, but Access quietly adds a MSysCompactError table with a couple of rows. One error -1611 complains that Access was stopped and couldn't complete the operation. A second, more-specific-seeming error complains that it can't find field "Description". That appears to be an internal error that has no relevance.
-- You try to copy the table to another database. Access gives an error complaining that another user is using the table or has updated the table, and won't complete the operation.
-- Other operations on the rows that, unnoticed by you, happen to contain the #Error values fail.
Regardless, the root problem is whatever causes the #Error values to get placed into the Memo fields in the first place.
Many posters have noted that it occurs if the UPDATE attempts to put strings longer than about 2000 characters into the Memo field. That's a surprise, as Memo fields should be able to hold a gigabyte of characters or more depending on the version, even if the UI only allows 65k.
So why does the error occur when Updating using >2000 characters?
The key factor that provokes this error is the Memo field having an index. Apparently, although the Memo type field can hold a bazillion characters, the index can't deal with more than about 2000.
Knowing that this is the precipitating factor, probably a number of workarounds come to mind. First, you can obviously just disable the index. This solution is easy to verify in a dummy database: Create two tables containing Memo fields, one with an index and the other without. Run update queries that put >2000 characters into each Memo and note the results.
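A sketch of that experiment in Access SQL (all names made up; run the statements one at a time from inside Access, where the VBA String() function is available; if your Access version refuses to index a Memo column via DDL, set the index in table Design view instead):
CREATE TABLE MemoNoIndex (ID COUNTER PRIMARY KEY, Notes MEMO);
CREATE TABLE MemoWithIndex (ID COUNTER PRIMARY KEY, Notes MEMO);
CREATE INDEX idxNotes ON MemoWithIndex (Notes);
-- after inserting a row into each table:
UPDATE MemoNoIndex SET Notes = String(3000, 'x');
UPDATE MemoWithIndex SET Notes = String(3000, 'x');  -- expect #Error here, per the above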
But perhaps you think you need the index? Your use case might be satisfied if you create a second field that will contain an initial substring of the main Memo (shorter than 2000 characters), and index that instead. This could be used for sorting purposes for example. In most cases, where a memo contains narrative information, it's unlikely that the memo data values differ only after 2000 characters. Or perhaps you can devise a hash function and make a separate column of that.
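A sketch of the substring variant (names made up; 255 characters keeps the key far below the limit, and the prefix column must be refreshed whenever the memo changes, e.g. in the form's BeforeUpdate event):
ALTER TABLE Memos ADD COLUMN BodyKey TEXT(255);
UPDATE Memos SET BodyKey = LEFT(Body, 255);
CREATE INDEX idxBodyKey ON Memos (BodyKey);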
What if you have a database that already contains these #Error values? Some advice floating around on the web, especially in relation to downstream problems like failure of Compact and Repair, suggests that your database is corrupt and should be abandoned. I'm not so sure. If you can delete the #Error-afflicted rows, then delete the index, and then recreate the deleted rows, you may be back in business. Compact and Repair should run properly at that point, giving some confidence that you fixed the offending part. (Make backups along the way, obviously.)
Workaround solution
Create two macros (Macro1 and Macro2).
Macro1
Gets all the necessary information from the open form that includes this long text, then closes it.
Macro2
Contains all the needed actions (starting with the update query that raises the error).
Create a form (Form_on_error) with only a button that runs Macro2.
Finally, add at the end of Macro1:
On Error
    Go to: Macro Name
    Macro Name: On_Error_2590
RunMacro Macro2
Submacro On_Error_2590
    OpenForm (Form_on_error)
End Submacro
...and it works!
So, only when the update query hits the error does the user need to click the button on the Form_on_error form.

Excel VLOOKUP missing data unless re-typed or selected and entered?

Google has not found the solution I need, so I thought I would try the geniuses on here with the never-ending Excel issue I'm having.
I'm running a banking reconciliation workbook and slowly adding bits of VBA to automate some of the tasks. The one I'm working on now finds large sums of money and renames their identifier from the bank statement to the same ID in our cashbook, so they are matched and balance out.
To do this I'm running an IF(VLOOKUP()) that returns yes or no on the cash value, then reordering the rows once they are found so I can line them up and match them correctly.
The main issue is that the VLOOKUP ignores some values that I can plainly see, reporting them as not found. While messing around to figure out why, I clicked to edit the cell and pressed Enter without changing the amount, and all of a sudden the value was found; the VLOOKUP only finds it after I click the cell and press Enter.
I have tried formatting, changing calculation to automatic, and tweaking the VLOOKUP to include a +0, as well as changing the exact match to approximate, and it still won't find the value. I even tried trimming and checked the LEN for whitespace, and both come out the same.
Currently I'm trying a For loop that selects each cell and sets it to itself, to mimic the select-and-enter, but it runs slowly and crashes.
Does anyone have a decent idea for fixing this missed match when searching?
This often happens to me when pasting data from somewhere: it may have been pasted as text, but when you edit the cell and press Enter it changes to numeric.
The solution is to use =VALUE() to change the numbers to numeric.
Or, when you paste the data from another source, choose Paste Special as text.
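For instance, if the pasted amounts sit in a hypothetical column A, a helper column such as
=VALUE(TRIM(A2))
gives you a numeric copy to point the VLOOKUP at, or you can coerce the look-up value inline with =VLOOKUP(VALUE(A2), ...).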
VLOOKUP works strangely when it is asked to do an approximate match with the look-up table unsorted by the look-up column.
If you're sure that an exact match should be enforced in your look-up column, try something along the lines of:
VLOOKUP(<lookup_value>, <table_array>, <col_index_num>, FALSE)
where <lookup_value>, <table_array>, <col_index_num> should be replaced with the values that you use in your look-up.

Find or Strip Invalid characters from Database

We are using a database where the front-end software has allowed the input of invalid characters. (I have no control over the software and cannot rewrite it.)
The characters include carriage returns, line breaks, �, and ¶; basically, anything that is not 0-9, a-z, or standard punctuation causes us issues with the database and how we use the data.
I'm looking for a way to scan the entire database to identify these invalid characters and either display them as results or strip them out.
I had been looking at this site, wondering if there was a way of searching for a certain range, but I might be barking up the wrong tree.
I'm fairly new to SQL so be gentle with me, thanks.
The only way I could think to do this would be to write a stored procedure that uses the system tables to get a list of all fields in the database/schema in question. Have it exclude system tables (or only include those that are user-defined), then dynamically write out SQL UPDATE statements based on the columns/tables found in the system-table queries, using regular expressions or character removal as in this article.
The system tables in question are:
SELECT table_name, column_name
FROM information_schema.columns
Pseudocode:
Get list of tables we want to do this for
For each table in list
    Get list of columns for the table that have string data
    For each column in table
        Generate update statement to strip unwanted characters
        -- Consider writing out table, column key, and before/after values to a history table, in case this has to be undone
        -- Consider a counter so I have an idea of what was updated
        Execute update statement
    Next column
Next table
Write out counter
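A T-SQL sketch of that pseudocode (SQL Server assumed; this version strips only CR, LF, and tab, so extend the REPLACE chain for other characters, and review the generated statements before executing anything):
DECLARE @sql nvarchar(max) = N'';
-- Build one UPDATE per string column; QUOTENAME guards odd identifiers
SELECT @sql = @sql
    + N'UPDATE ' + QUOTENAME(table_schema) + N'.' + QUOTENAME(table_name)
    + N' SET ' + QUOTENAME(column_name)
    + N' = REPLACE(REPLACE(REPLACE(' + QUOTENAME(column_name)
    + N', CHAR(13), ''''), CHAR(10), ''''), CHAR(9), '''');' + CHAR(10)
FROM information_schema.columns
WHERE data_type IN ('varchar', 'nvarchar', 'char', 'nchar');
PRINT @sql;                  -- inspect the generated statements first
-- EXEC sp_executesql @sql;  -- then run them once satisfied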
Since you say:
"the data then moves to a second program that cannot handle these characters and this causes the process to fail."
I'm wondering if you can leave the unreadable data where it is and create a new column for changed data that's only populated if/when the 2nd process fails. You'll still have to test every character of the data in the failed cell, but you wouldn't have to test every character of every row. After you determine the updated text to process, you can call the 2nd process again with the updated value.
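A sketch of that shape (all names hypothetical): keep the original value untouched and populate a cleaned copy only for rows the second program rejects, then retry with the cleaned value.
ALTER TABLE SourceData ADD CleanedValue nvarchar(max) NULL;
DECLARE @FailedId int = 42;  -- hypothetical id of the row that bounced
UPDATE SourceData
SET CleanedValue = REPLACE(REPLACE(RawValue, CHAR(13), ''), CHAR(10), '')
WHERE Id = @FailedId;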

Get carriage returns and tabs into buffer from database row

This one has me stumped.
I have a database column that stores a message body that containing tabs and carriage returns. I want to take that value and store it in another database by using cut and paste.
When I do a SELECT on the row I want and use Ctrl-C/Ctrl-V into the other database's INSERT statement, the value gets put into the new table minus the carriage returns and tabs.
There must be a simple way to preserve those characters, any help would be appreciated!
OK, I finally figured it out!
First, in SQL Server Management Studio, I went to Tools > Options > Query Results and raised the output limit from 256k to a much higher figure to prevent truncation.
Then I output to a file (rather than to the grid), opened the file in a text editor, and voilà: all the special characters are preserved (and you can copy them to the buffer).
Not sure if this is the best solution since it is very hacky, but it works nonetheless!
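As an aside, when both databases live on the same SQL Server instance, a server-side copy sidesteps the clipboard (and its character mangling) entirely; a sketch with made-up names:
INSERT INTO TargetDb.dbo.Messages (Body)
SELECT Body
FROM SourceDb.dbo.Messages
WHERE MessageId = 42;  -- the row being copied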