Aerospike: LDT Sub-Record Create Error (note: ldt-enabled is true)

I am using Aerospike 3.7.3 with the Large Stack data type and am facing the following error. Please help.
Mar 19 2016 05:00:17 GMT: WARNING (ldt): (ldt_aerospike.c::507) crec_create: LDT Sub-Record Create Error [rv=-1]... Fail

(I work at Aerospike.) The Large Stack data type has been deprecated. Also, the Large Data Types in general are not at the same level of maturity as the rest of the platform and are not recommended for production use. There are some edge-case situations where Large Data Types can end up corrupted. I would recommend running a scan (a backup would do), which should print details on the bad LDTs; those can then be removed from the system (and potentially re-inserted). I would highly recommend finding an alternate data model that does not use LDTs.

Related

BigQuery: The job encountered an error during execution

I've had this query in BigQuery that I have been updating every day for the last few months. It's been fine - some occasional errors, but retrying has solved the problem.
But the last few days I have been getting the error: The job encountered an error during execution. Retrying the job may solve the problem.
The error description says that it's an external error, so how can I fix that?
I have been retrying (with rather long pauses in between), but I still get the error.
JobID example: bquxjob_152ced5d_169917f0145
Does anyone have any idea what's going on? Are there any data/time limitations I might be hitting (but why only in the last few days, then)?
You can use GCP Stackdriver to monitor your BigQuery process using this URL.
Interesting information you can find there, among other things, includes the queryTime heatmap and the slot usage, which might help you understand your problem better.
On the subject of external table usage, you can use Google Transfer (see this link for details) to schedule a repeated transfer from CSV files to a BigQuery table.
The image below shows how to get to the transfer setup page from the web UI.
I encountered this dreadfully useless error in a scheduled query. It was working great and then one day it stopped working at all and has been failing ever since without any other explanation. The StackDriver (now "Logs Explorer") showed nothing more enlightening:
jobStatus: {
  errorResult: {
    code: 14
    message: "Error encountered during execution. Retrying may solve the problem."
  }
  errors: [
    0: {
      code: 14
      message: "Error encountered during execution. Retrying may solve the problem."
    }
  ]
  jobState: "DONE"
}
Figuring out the actual issue takes a long time because scheduled queries start slowly, since they use BATCH priority. What I found in my case was that the partitioned table and the "Partition field" setting in the scheduled query were the culprit. I dropped the table and removed the partition field, and voila, the thing works again (although this is far from ideal, since I need partitioning).
I hope this helps someone else running up against that useless error but in any case, I hope the good folks working on BigQuery find a better error to bubble up.
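If you need to keep partitioning while avoiding the "Partition field" setting in the scheduled-query configuration, one option is to create the destination table yourself with an explicit PARTITION BY clause and let the scheduled query append to it. This is only a sketch; the project, dataset, table, and column names below are hypothetical placeholders.

-- Hypothetical names; adjust the project, dataset, table, schema, and timestamp column.
CREATE TABLE IF NOT EXISTS `my_project.my_dataset.my_dest_table`
(
  event_ts TIMESTAMP,
  payload  STRING
)
PARTITION BY DATE(event_ts);

With the table pre-created this way, the scheduled query can simply write to it (for example with an append disposition) without the transfer configuration managing the partition field itself.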
I ran into this problem when replacing the contents of a partitioned table. Two retries did not help. When I removed the --range_partitioning from the command the update was processed correctly. The table remained partitioned.
So there seems to be an issue with updates to partitioned tables, and when that is the cause these errors might not benefit from a retry. I don't know whether there are other causes of this error.
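As a possible alternative when replacing the contents of a range-partitioned table, the partitioning can be declared in the SQL DDL instead of being passed on the command line. The sketch below uses hypothetical table and column names and an arbitrary bucket range, so treat it only as an illustration of the idea, not a confirmed fix for this error.

-- Hypothetical names and ranges; rebuilds the table from a staging table while
-- restating the integer-range partitioning in the DDL itself.
CREATE OR REPLACE TABLE `my_project.my_dataset.my_table`
PARTITION BY RANGE_BUCKET(customer_id, GENERATE_ARRAY(0, 1000000, 1000))
AS
SELECT *
FROM `my_project.my_dataset.my_staging_table`;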
This kind of issue probably has a lot to do with BigQuery quota errors: https://cloud.google.com/bigquery/docs/troubleshoot-quotas#ts-number-column-partition-quota, as mentioned in other answers, such as the 4,000 partitions-per-table quota.
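If you suspect the partition quota, one quick way to check how many partitions each table in a dataset already has is to query the dataset's INFORMATION_SCHEMA.PARTITIONS view and compare the counts against the 4,000-partition limit. The project and dataset names below are placeholders.

-- Placeholder project/dataset; counts partitions per table in the dataset.
SELECT
  table_name,
  COUNT(DISTINCT partition_id) AS partition_count
FROM `my_project.my_dataset.INFORMATION_SCHEMA.PARTITIONS`
GROUP BY table_name
ORDER BY partition_count DESC;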

Optimizer internal error while loading data from U-SQL table

Is there a way to get around this error?
"CQO: Internal Error - Optimizer internal error. Assert:
a_drgcidChild->CLength() == UlSafeCLength(popMS->Pdrgcid()) in
rlstreamset.cpp:499"
I am facing this issue while loading data from a partitioned U-SQL table.
#myData =
SELECT *
FROM dbo.MyTable;
If you encounter any system error message (or something that says Internal Error), please open a support ticket with us and/or send me your job link (if it happens on the cluster) or the smallest self-contained repro (if it is happening with a local run) to usql at microsoft dot com.
Thanks
Michael
UPDATE: This issue has been fixed and will be made available in the next refresh. If you are blocked, please contact me for a private runtime.

Read access violation related to input(variable,anydtdtm.);

Somebody tell me I'm not crazy. I have SAS on a server, and I'm running the following code:
data wtf;
a=".123456 1 1";
b=input(a,anydtdtm.);
run;
If I run this on my local computer, no problem. If I run this on the server, I get:
ERROR: An exception has been encountered.
Please contact technical support and provide them with the following traceback information:
The SAS task name is [DATASTEP]
ERROR: Read Access Violation DATASTEP
Exception occurred at (04E0AB8C)
Task Traceback
Address Frame (DBGHELP API Version 4.0 rev 5)
0000000004E0AB8C 0000000009C4EC20 sasxdtu:tkvercn1+0x9B4C
0000000004E030D9 0000000009C4F100 sasxdtu:tkvercn1+0x2099
0000000005FF14BE 0000000009C4F108 uwianydt:tkvercn1+0x47E
0000000002438026 0000000009C4F178 tkmk:tkBoot+0x162E6
Does anyone else get this error???
This is an internal bug that cannot be resolved by the user. You'll need to send this information, your environment description, and the exact steps to recreate the bug over to SAS Technical Support to open up an investigation and determine a workaround.
If your server is a database not made up of .sas7bdat files, it might be that the SAS/ACCESS engine is attempting to translate the function into something the server's language can understand but is unable to do so properly; that is, it might think it's doing it correctly when it's not. There are special cases where this can occur, and you may have discovered one.
If you are in fact querying some other database, try adding this before running the data step:
options sastrace=',,,d' sastraceloc=saslog;
This will show all of the steps as SAS sends data & functions to and from the server, and may help give some insight.
I am getting the same error on a Linux system running SAS 9.4:
AUTOMATIC SYSSCP LIN X64
AUTOMATIC SYSSCPL Linux
AUTOMATIC SYSVER 9.4
AUTOMATIC SYSVLONG 9.04.01M3P062415
AUTOMATIC SYSVLONG4 9.04.01M3P06242015
Until SAS can fix the informat, you probably need to add additional testing in your code to exclude strange values like that.

SAS and Oracle error: ORA-04031

I am using an ORACLE db with SAS/connect.
I recently implemented a change in my libname statement (a week ago) in which I added the following (I don't know if it is related to the issue):
insertbuff=10000 updatebuff=10000 readbuff=10000
Starting yesterday, I have been having an ORACLE issue when, after doing a
proc sql;
drop table oralib.mytable;
quit;
data oralib.mytable;
set work.mytable;
run;
I get the following error:
ERROR: ERROR: ERROR: ORACLE execute error: ORA-04031: unable to
allocate 4160 bytes of shared memory ("shared pool","unknown
object","sga heap(1,0)","modification "). With the occurrence of the above ERROR, the error limit of 1 set by the
ERRLIMIT= option has been reached. ROLLBACK has been issued(Any Rows processed after the last COMMIT are lost).
Total rows processed: 1001
Rows failed : 1
It seems to happen randomly, on any table of any size. Sometimes it will go through; most of the time it won't. Is there a shared pool release I should do from SAS?
Thanks for your help!
The shared pool is a memory structure in Oracle which holds the following:
data dictionary cache
SQL query and PL/SQL function result caches
storage for recently executed code in its parsed form
It is possible to flush the shared pool, but this is not a good idea and I would not recommend it. What you have to do is size the shared pool of the database properly. Note that the shared pool is shared by the entire Oracle instance - it is not per user. So, if there are other users of the database, they might be contributing to the problem. I doubt that any particular query is the cause; I would guess that the shared pool is simply undersized.
In case you have some DBA privileges granted for your user, you can check the current shared pool size by running the following query:
SELECT * FROM v$sgainfo;
You can increase the size of the shared pool with the following statement:
ALTER SYSTEM SET SHARED_POOL_SIZE = 200M;
Nevertheless, the best solution will be to turn to the DBA managing the database (if there is one).
I'm not a SAS guy, so, I'll answer your question from the POV of an Oracle DBA, which is what I am.
ORA-04031 means you're running out of space in the shared pool. With a product like SAS, I'm sure there is a recommended minimum size for the shared pool. So, you should check the SAS installation documentation and confirm whether your database has a large enough shared pool size set. (Use show parameter shared_pool_size to see what size is set in your database.)
Second, I'm not familiar with the changes you made, and I'm not sure if that would have an effect on the shared pool utilization. Perhaps check the SAS documentation on that one.
Third, it could be an Oracle bug. You should check My Oracle Support for your version of Oracle, and do a search on ORA-04031, with those specific arguments you are seeing in your error message. If it's a known bug, there may be a patch already available.
If it's none of the above, you may need to open an SR with Oracle.
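For a first look without involving the DBA, a couple of read-only queries can show the configured size and the current free memory of the shared pool. This assumes your account has SELECT access to the V$ views, which is often not granted to application users.

-- Current setting of the shared pool (0 usually means it is auto-managed via SGA_TARGET/MEMORY_TARGET).
SELECT name, value FROM v$parameter WHERE name = 'shared_pool_size';

-- Free memory currently left in the shared pool.
SELECT pool, name, ROUND(bytes / 1024 / 1024) AS mb
FROM v$sgastat
WHERE pool = 'shared pool' AND name = 'free memory';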
Hope that helps.

ORA-4030 Oracle: How to resolve

In our Oracle server (10g), we sometimes get an ORA-4030 error:
ORA-04030: out of process memory when trying to allocate nn bytes
We understand it is related to memory sizing, and we are trying some memory settings.
Other than this, we wanted to know:
(1) whether any specific SQL query usage can cause this kind of error, and
(2) whether any Oracle SQL query tuning can be applied to avoid it.
Your replies will help.
Thanks in advance.
1) Sorts, DISTINCT, GROUP BY, and hash joins are the operations most likely to give you this error.
2) What OS do you use? On Linux you can see what resources your users have with ulimit -a.
You should increase the memory available per process via the PGA; a rough sketch is shown below.
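As a rough illustration of the kind of check and change involved (the 2 GB value is only an example, and on most systems this is a DBA task), you can look at the current PGA usage and then raise the target:

-- How much PGA the instance is currently using versus its target.
SELECT name, ROUND(value / 1024 / 1024) AS mb
FROM v$pgastat
WHERE name IN ('aggregate PGA target parameter',
               'total PGA allocated',
               'maximum PGA allocated');

-- Example only: raise the PGA target to 2 GB (requires ALTER SYSTEM privilege).
ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE = BOTH;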
Regards
One thing which could be contributing to the error is not freeing cursors. In .NET, a SQL statement corresponds to a database cursor. Make sure that the applications are closing (and disposing of) the SQL statements they use.