We have an application whose session data is stored in a table. From that table, a SQL job moves the data into a second table, segregating it more meaningfully.
When we created the job, it passed in the DEV and TEST environments, but when we implemented it in production and stage, it failed with the error below.
Conversion failed when converting date and/or time from character string
We tried restoring the DB to an instance other than the one where the application DB resides, and there the SQL job completes successfully. The job fails only on the instance where the application DB resides.
Steps we tried:
We compared the SQL configuration of the instances where the job completes successfully against the instance where it fails, and found no differences.
We executed the stored proc manually, adding some print statements to see if it is really a code issue. This didn't help, since the job is not failing for any particular session GUID and the same step passes in the DEV environment.
We are not able to figure out why this happens only on the instances where the application DB resides.
"Conversion failed when converting date and/or time from character string". This is error is based on data. It has a string which is not in required format to be converted to a data. The issue is not with code, its with data. Add a preprocessing step to convert data to requires format.
Check the default language of the server account the job runs under - my guess would be something like DEV/TEST's account being set to British English while LIVE is set to English.
However, even if that is the case, it only explains why the issue appears on LIVE. The underlying fix is to make sure your job makes no assumptions about date formats, does no implicit date conversions, and holds dates in date variables/columns, not character ones.
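To check and demonstrate this, something along these lines can help (the login name is a placeholder; the SET LANGUAGE examples just illustrate why the same string converts on one instance and not another):

-- Default language of the login the job runs under (login name is hypothetical)
SELECT name, default_language_name
FROM sys.server_principals
WHERE name = N'JobLoginName';

-- The same literal converts or fails depending on the session language
SET LANGUAGE british;
SELECT CONVERT(datetime, '13/06/2012');  -- succeeds (dd/mm/yyyy)
SET LANGUAGE us_english;
SELECT CONVERT(datetime, '13/06/2012');  -- fails with the conversion error above

-- Safe alternatives: ISO 8601 literals, or an explicit style number
SELECT CONVERT(datetime, '2012-06-13T00:00:00');  -- language-independent
SELECT CONVERT(datetime, '13/06/2012', 103);      -- explicit dd/mm/yyyy style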
Related
I'm getting the above error when running a Web Job in a multi-threaded environment. I'm calling one stored procedure to perform some action; the stored procedure has code which inserts/updates/deletes records in pretty big temporal tables (3-4M records, not sure if that's relevant here). Every run of the job deals with (inserts/updates) around 40K-80K records based on a condition. When a single thread is running, everything goes fine. But as soon as the parallel job count is set to 2 or more, I get the error. From initial analysis, the issue seems to be with the auto-generated column values for SysStartTime and SysEndTime in the history table. I have tried one of the solutions from the internet, subtracting 1 second from the date saved in those columns, as below:
DEFAULT (dateadd(second,(-1),sysutcdatetime()))
But it's not working. I have read a few articles which say temporal tables do not work properly in a multi-threaded environment. Now I'm not sure why the issue is happening or how to resolve it in a multi-threaded environment.
Can someone please help me understand the reason behind the error and how to fix it?
NOTE: I can't make my code run on a single thread. A minimum of three threads is required, so converting to a single thread is not a solution in this case.
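For reference, SysStartTime and SysEndTime are the period columns of a system-versioned temporal table; they are generated by the engine, not supplied by the insert. A minimal sketch of the pattern (the table and data columns here are invented for illustration):

CREATE TABLE dbo.Orders
(
    Id int NOT NULL PRIMARY KEY CLUSTERED,
    Amount decimal(10, 2) NOT NULL,
    SysStartTime datetime2 GENERATED ALWAYS AS ROW START NOT NULL,  -- set by the engine
    SysEndTime   datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,  -- set by the engine
    PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.OrdersHistory));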
We got the error {"message":"Invalid snapshot time 1472342794519, unable to read before 1472342794785","reason":"invalid"}. Other Q&As describe this error happening when a table decorator's parameters are invalid; however, our query does not use table decorators.
The query uses TABLE_DATE_RANGE, but its arguments are date timestamps, so the lower digits should be zeros, unlike the values in the error above.
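For reference, the query shape is roughly like this (legacy BigQuery SQL; the dataset, table prefix, and dates are placeholders, not our actual query, and the function expects tables named prefix_YYYYMMDD):

SELECT field1
FROM TABLE_DATE_RANGE([mydataset.sessions_],
                      TIMESTAMP('2016-08-01'),
                      TIMESTAMP('2016-08-27'))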
Retrying the same query succeeded.
I could provide the job ID, but it includes internal information of our company, so I apologize that I cannot write it here directly.
The tables that the TABLE_DATE_RANGE wildcard evaluates to are resolved as of the start time of the query. Looking at the timestamps, it appears the table was deleted right after the job started executing, which causes table resolution to throw that error.
I am using PDI to delete and insert some data in a DB, and I have the following issue. I create two variables called START_DATE and END_DATE that are used to select the data to be deleted from my DB. I am able to get them and run my transformation with no errors in the log file, but when I checked whether data was deleted, I found it hadn't been. I then checked my "DeleteProcedure" step, and it says "Conversion error: null". I have tried different approaches to take the variables and pass them as Strings, but I haven't been able to solve the issue. It cannot be a SQL mistake, as I tested it with a constant and it works.
Any ideas? I've attached some pics. Thanks!
As the documentation of the Execute SQL script step says:
Note: When you have an issue, that the SQL is started at the initialization phase of the transformation and not for each row, make sure to check the option "Execute for each row" (see description below).
In your case the SQL executes during the initialization phase of the transformation, which is why it gets null values instead of the ones from the previous step.
While refreshing a Webi report I am getting the error:
A database error occured. The database error text is: (CS) "Unexpected behavior" . (WIS 10901)
All the objects parse in the universe and the server is also responding. What could be the possible reason?
We are also able to run the query in the database using a database client tool.
If the error message appears after a long time, it might just be a timeout issue.
Otherwise, you could try importing a version of the report that works from the CMS to your local drive, renaming it, and running it again.
It can be caused by a special character in the data combined with the fact that the server language settings do not foresee such a character, and therefore Business Objects cannot parse it for presentation.
If that is the case, you might need to configure an environment variable on the server (like NLS_LANG), setting it to a value such that those special characters in your data can be handled by Business Objects.
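For an Oracle back end the value has the form LANGUAGE_TERRITORY.CHARACTERSET; the example below is only an illustration and must match your database's actual character set:

NLS_LANG=AMERICAN_AMERICA.AL32UTF8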
In my situation, the error appears when some object from the database has changed or does not exist anymore. So we need to delete this object from the Universe, or make sure the field exists in the database with the same name and type.
I had the same problem with my reports. After a couple of hours of "investigation", I found the cause.
I had created an object in my universe and set an inappropriate object data type, Number, when the value in the database has type Character.
It threw an Oracle error (ORA-01722) and a Business Objects error (WIS 10901), even though the SQL copied from the report creator interface, executed directly against the database, returned proper data.
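ORA-01722 is "invalid number", which fits the mismatch: comparing a character column against a number forces an implicit TO_NUMBER on the column. A made-up illustration (table and column names are hypothetical):

-- customer_code is VARCHAR2 in the database but was declared as Number in the universe
SELECT * FROM customers WHERE customer_code = 123;
-- Oracle evaluates this as TO_NUMBER(customer_code) = 123, so any
-- non-numeric value in the column raises ORA-01722: invalid number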
I'm working with a moderately sized database of about 60,000 records. I am building a mobile application which will be able to check out a single table into a compact .sdf file for viewing and altering on the device, then allow the user to sync their changes back up with the main server and receive any new information.
I have set it up with the Sync Framework using a WCF Service Library. When setting up the connection, for some reason the database won't let me check "Use SQL Server Change Tracking" and throws the error:
"'Unable to initialize the client database, because the schema for table 'Inventory' could >not be retrieved by the GetSchema() method of DbServerSyncProvider. Make sure that you can >establish a connection to the client database and that either the >SelectIncrementalInsertsCommand property or the SelectIncrementalUpdatesCommand property of >the SyncAdapter is specified correctly."
So I left it unchecked and set it to use the already-created columns "AddDateTime" and "LastEditTime". It seemed to work okay, and after a massive amount of tweaking I have it partially working. The changes on the device sync up perfectly with the database: updates, deletes, all get applied. However, changes on the server side never get downloaded. I've made sure everything is set up right with the bidirectional setup, so that shouldn't be the problem. I let it sit overnight so the database received ~500 new records, and this morning it actually synced the latest 24 entries to the device, out of 500 new. So that is further proof that it's able to receive information from the server, but for all practical purposes, it's not.
I've tried pretty much everything and I'm honestly getting close to losing it. If anyone has any ideas they can throw out for me to chase, I would be most grateful.
I'm not sure if I just need to go back and figure out why I can't use "SQL Server Change Tracking", or if there is a simple explanation for why it's not actually syncing 99% of the changes on the server back to the client.
Also, the server database table schema can't be altered, as a lot of other services use it. But the compact database can be whatever the heck it needs to be to just store the table and sync properly in both directions.
Thank you!!
Quick Overview:
Using WCF and syncing without SQL Server Change Tracking (Fully enabled on server and database)
Syncing changes from client to server works perfectly
Syncing from server back to the client not so much: out of 500 new entries overnight, on a sync it downloaded 24.
EDIT: JuneT got me thinking about the times and the anchors. When I synced this morning, it pulled 54 of about 300 newly added records. I went into the line below (there are about 60 or so columns, so I removed them for readability; this is kind of a joke):
this.SelectIncrementalUpdatesCommand.CommandText = @"SELECT [Value], [Value], [Value] FROM TABLE WHERE ([LastEditDate] > @sync_last_received_anchor AND [LastEditDate] <= @sync_new_received_anchor AND [AddDateTime] <= @sync_last_received_anchor)";
And replaced @sync_last_received_anchor with two DIFFERENT times, taking out the middle condition. Upon syncing it now returns the rows trapped between those two times, giving me:
this.SelectIncrementalUpdatesCommand.CommandText = @"SELECT [Value], [Value], [Value] FROM TABLE WHERE ([LastEditDate] > '2012-06-13 01:03:07.470' AND [AddDateTime] <= '2012-06-14 08:54:27.727')"; (NOTE: the second date is just the current time now)
Though it returned a few hundred more rows than initially planned (I set the date gap to capture 600, and it returned just over 800), it does in fact sync the client up with the new server changes.
Can anyone explain why I can't use @sync_last_received_anchor, and what I should be looking for? I suppose I could always add a box that allows the user to select the date to begin updating from. Or maybe add some sort of XML file to store the sync date, updated any time a sync is -successfully- completed?
Thanks!
EDIT:
Ran the SQL Profiler on it... the date (@sync_last_received_anchor) is getting set 8 hours ahead of whatever the time really is. I have no idea how or why it's doing this, but that would definitely make sense.
Turns out the anchors are collected like this:
this.SelectNewAnchorCommand.CommandText = "SELECT @sync_new_received_anchor = GETUTCDATE()";
That UTC date is what was causing the 8-hour gap. To fix it, either change it to GETDATE(), or convert your columns to UTC time in the WHERE clause of the commands.
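A sketch of both options (LastEditDate is the column from the question; the WHERE fragment is illustrative, and the minute-offset idiom is the usual pre-SQL Server 2016 way to shift a local datetime to UTC, assuming the offset hasn't changed between the stored date and now):

-- Option 1: collect the anchor in server local time instead of UTC
SELECT @sync_new_received_anchor = GETDATE()

-- Option 2: keep the UTC anchor and shift the local-time column to UTC in the WHERE clause
WHERE DATEADD(minute, DATEDIFF(minute, GETDATE(), GETUTCDATE()), [LastEditDate]) > @sync_last_received_anchor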
After spending many hours with many cups of coffee, I've figured out how to solve this error of mine. While I was running the code in the desktop testing area, everything seemed to be working perfectly; however, the same code and web service on the target device gave this error repeatedly. Then, suddenly, the "dbo_" prefixes on the compact database table names started looking interesting, like they were trying to tell me something really important. So, I listened...
Configuration.SyncTables.Add("Products);
in ClientSyncAgent.cs should be changed to
Configuration.SyncTables.Add("dbo_Products");
[Exeunt]