I try to add a CloudWatch Scheduled Event with the following cron expression:
cron(0 1 * * ? *)
I want to trigger this event every day at one o'clock.
But I always get the following error:
There was an error while saving rule dms-unstage-tibia.
Details: Parameter ScheduleExpression is not valid.
What is wrong with this cron expression?
OK, I fixed it myself. If you create a CloudWatch Scheduled Event directly in the CloudWatch console, you don't need the "cron()" wrapper, only the expression inside. But if you create the event from Lambda (or CloudFormation), you have to write "cron()". Not very intuitive.
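The two accepted shapes can be captured in a small helper (plain Python; the function name is mine, not an AWS API):

```python
def to_schedule_expression(fields: str) -> str:
    """Wrap a bare six-field CloudWatch cron body in the cron(...) form
    expected by the Events API, Lambda triggers and CloudFormation.
    The bare body is what the CloudWatch console field takes."""
    parts = fields.split()
    if len(parts) != 6:
        raise ValueError(f"AWS cron expects 6 fields, got {len(parts)}: {fields!r}")
    return f"cron({fields})"

print(to_schedule_expression("0 1 * * ? *"))  # cron(0 1 * * ? *)
```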
I'm using DolphinDB subscribeTable but an error occurs:
I’d like to know what is wrong with my code. I wanted to subscribe to stream tables in DolphinDB and the code is:
csEngine1=createCrossSectionalEngine(name=sTb_Cs + "_eng",
dummyTable=objByName(sTb_join12),
keyColumn=`symbol,
triggeringPattern="perRow",
useSystemTime=false, timeColumn=`TimeStamp)
subscribeTable(tableName=sTb_join12, actionName="do"+sTb_Cs, offset=-1, handler=append!{csEngine1}, msgAsTable=true, hash=5, reconnect=true)
The UDF handler of another subscription to sTb_join12 also queries csEngine1.
def append_plan(csEngine1, candidates2, strategy, msg){
    // body omitted in the question; it queries csEngine1
}
subscribeTable(tableName=sTb_join12, actionName="do" +strategy, offset=-1, handler=append_plan {csEngine1, candidates2, strategy}, msgAsTable=true, hash=6, reconnect=true)
Please tell me why the error occurs and how I can fix it.
The problem is that the parameter of your function (append_plan in this case) is not mutable. With an immutable parameter, the object (csEngine1) is set to read-only when the function is called. If another thread writes to this object (as in your first subscription), an error is reported.
Solution #1
Pass the same hash value in the two subscribeTable calls that subscribe to sTb_join12. With the same hash value, the two subscription tasks are processed in turn by the same thread, which avoids the concurrent access and the error.
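For instance (a sketch reusing the names from the question; this serializes the two handlers rather than running them in parallel):

```
// Both subscriptions to sTb_join12 use hash=5, so their messages are
// processed in turn by the same worker thread and never touch
// csEngine1 concurrently.
subscribeTable(tableName=sTb_join12, actionName="do"+sTb_Cs, offset=-1,
    handler=append!{csEngine1}, msgAsTable=true, hash=5, reconnect=true)
subscribeTable(tableName=sTb_join12, actionName="do"+strategy, offset=-1,
    handler=append_plan{csEngine1, candidates2, strategy}, msgAsTable=true,
    hash=5, reconnect=true)
```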
Solution #2
Combine the two subscribeTable calls into one: in a single UDF handler, append! to the cross-sectional engine first and then run the query, so that the append operation and the select query on the engine are processed sequentially. You can refer to the following example:
tradesCrossEngine008=createCrossSectionalEngine(name="tradesCrossEngine008", dummyTable=objByName(sTc_join12_testSTR), keyColumn=`Symbol, triggeringPattern=`perRow, useSystemTime=false, timeColumn=`TimeStamp)
def append_plan(msg){
getStreamEngine("tradesCrossEngine008").append!(msg)
select Symbol, rank(left_v1+left_v2) as rmk from getStreamEngine("tradesCrossEngine008")
}
subscribeTable(tableName=sTc_join12_testSTR, actionName="tradesCrossEngine008", offset=0, handler=append_plan, msgAsTable=true, hash=9, reconnect=true)
I'm a little bit confused by the cron documentation for CloudWatch Events. My goal is to create a cron job that runs every day at 9am, 5pm, and 11pm EST. Does this look correct, or did I do it wrong? It seems CloudWatch uses 24-hour UTC time, so I tried to convert from EST.
I thought I was right, but I got the following error when trying to deploy the CloudFormation template via sam deploy:
Parameter ScheduleExpression is not valid. (Service: AmazonCloudWatchEvents; Status Code: 400; Error Code: ValidationException
What is wrong with my cron job? I appreciate any help!
UPDATED:
The template below gets the same error, Parameter ScheduleExpression is not valid:
Events:
  CloudWatchEvent:
    Type: Schedule
    Properties:
      Schedule: cron(0 0 9,17,23 ? * * *)
MemorySize: 128
Timeout: 100
You have to specify a value for all six required cron fields. Your expression, cron(0 0 9,17,23 ? * * *), has seven.
This should satisfy all your requirements:
cron(0 4,14,22 ? * * *)
Generated using:
https://www.freeformatter.com/cron-expression-generator-quartz.html
There are a lot of other cron generators you can find online.
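The UTC conversion can be double-checked with Python's zoneinfo (a sketch; the date is an arbitrary winter day, when New York is on EST, UTC-5):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

eastern, utc = ZoneInfo("America/New_York"), ZoneInfo("UTC")
for hour in (9, 17, 23):
    # 09:00, 17:00 and 23:00 local on a standard-time date
    local = datetime(2021, 1, 15, hour, 0, tzinfo=eastern)
    print(f"{hour:02d}:00 EST -> {local.astimezone(utc).hour:02d}:00 UTC")
# 09:00 EST -> 14:00 UTC
# 17:00 EST -> 22:00 UTC
# 23:00 EST -> 04:00 UTC
```

Note that CloudWatch cron always runs in UTC, so during daylight saving time (EDT, UTC-4) a fixed 4,14,22 UTC schedule drifts an hour relative to local time.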
I perform a batch update on an OData v2 model, that contains several operations.
The update is performed in a single changeset, so that a single failed operation fails the whole update.
If one operation fails (due to business logic) and a message is returned, is there a way to know which operation triggered the message? The response I get contains the message text and nothing else that seems useful.
The error function is triggered for every failed operation, and contains the same message every time.
Maybe there is a specific way the message should be issued on the SAP backend?
The ABAP method /iwbep/if_message_container->ADD_MESSAGE has a parameter IV_KEY_TAB, but it does not seem to affect anything.
Edit:
Clarification following conversation.
My service does not return a list of messages, it performs updates. If one of the update operations fails with a message, I want to connect the message to the specific update that failed, preferably without modifying the message text.
An example of the error response I'm getting:
{
  "error":{
    "code":"SY/530",
    "message":{
      "lang":"en",
      "value":"<My message text>"
    },
    "innererror":{
      "application":{
        "component_id":"",
        "service_namespace":"/SAP/",
        "service_id":"<My service>",
        "service_version":"0001"
      },
      "transactionid":"",
      "timestamp":"20181231084555.1576790",
      "Error_Resolution":{
        // SAP standard message here
      },
      "errordetails":[
        {
          "code":"<My message class>",
          "message":"<My message text>",
          "propertyref":"",
          "severity":"error",
          "target":""
        },
        {
          "code":"/IWBEP/CX_MGW_BUSI_EXCEPTION",
          "message":"An exception was raised.",
          "propertyref":"",
          "severity":"error",
          "target":""
        }
      ]
    }
  }
}
If you want to keep the exact same message text for all operations, the simplest way to determine the message origin is to add a specific 'tag' to it in the backend.
For example, you can fill the PARAMETER field of the message structure with a specific value for each operation. This way you can easily determine the origin in gateway or frontend.
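A minimal sketch of that tagging (ABAP; assumes the messages are passed to the container as a BAPIRET2 table, and the tag value is purely illustrative):

```abap
" Tag each operation's messages via the PARAMETER field of BAPIRET2
DATA: ls_return TYPE bapiret2,
      lt_return TYPE bapiret2_t.

ls_return-type      = 'E'.
ls_return-id        = 'ZMY_MSG_CLASS'.   " your message class
ls_return-number    = '001'.
ls_return-parameter = 'UPDATE_OP_02'.    " tag identifying the failing operation
APPEND ls_return TO lt_return.

lo_message_container->add_messages_from_bapi( it_bapi_messages = lt_return ).
```

The frontend can then match the tag in the errordetails to the operation that produced it.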
If I understand your question correctly, you could try the following.
Redefine these DPC methods:
CHANGESET_BEGIN: set cv_defer_mode to abap_true.
CHANGESET_END: just redefine it, with nothing inside.
CHANGESET_PROCESS: here you get your requests in a table, which carries the operation number (that's what you seek) and the key value structure for each call.
Loop over the table and call the corresponding method for each entry.
Put the result of each operation into CT_CHANGESET_RESPONSE.
If one operation fails, you can raise the busi_exception right there, where you have access to the actual operation number.
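A rough skeleton of those redefinitions (parameter names follow the /IWBEP/IF_MGW_APPL_SRV_RUNTIME signatures; the loop body is illustrative, not a complete implementation):

```abap
METHOD /iwbep/if_mgw_appl_srv_runtime~changeset_begin.
  " Defer processing so the whole changeset arrives in CHANGESET_PROCESS
  cv_defer_mode = abap_true.
ENDMETHOD.

METHOD /iwbep/if_mgw_appl_srv_runtime~changeset_process.
  LOOP AT it_changeset_request ASSIGNING FIELD-SYMBOL(<ls_request>).
    " <ls_request>-operation_no identifies the operation within the batch.
    " Dispatch to your actual update logic per entry here, then fill the
    " matching line of ct_changeset_response.
    " On a business error, raise /iwbep/cx_mgw_busi_exception with a
    " message that references <ls_request>-operation_no.
  ENDLOOP.
ENDMETHOD.
```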
for further information about batch processing you can check out this link:
https://blogs.sap.com/2018/05/06/batch-request-in-sap-gateway/
is that what you meant?
I am trying to set up a scheduler with a cron expression.
<camel:endpoint id="sqlEndpoint" uri="sql:${sqlQuery}?scheduler=spring&scheduler.cron=0+6+8+*+*&dataSourceRef=veloxityDS&useIterator=false"/>
But when I run this as a consumer, this exception occurred:
org.apache.camel.FailedToCreateConsumerException: Failed to create
Consumer for endpoint: Endpoint[sql://$select * from
dual?dataSourceRef=veloxityDS&scheduler=spring&scheduler.cron=0+6+8++&useIterator=false].
Reason: There are 1 scheduler parameters that couldn't be set on the
endpoint. Check the uri if the parameters are spelt correctly and that
they are properties of the endpoint. Unknown parameters=[{cron=0 6 8 *
*}]
Any ideas?
The endpoint you are trying to create is using parameters that don't exist. There is a full list of parameters at: http://camel.apache.org/sql-component.html
If you want your SQL procedure to run on a time interval you can either use a quartz endpoint, a polling consumer, or a route scheduler depending on your needs.
http://camel.apache.org/polling-consumer.html
http://camel.apache.org/quartz2.html
http://camel.apache.org/cronscheduledroutepolicy.html
Current parameter issues on your endpoint:
scheduler - not a supported parameter
scheduler.cron - not a supported parameter
dataSourceRef - deprecated.
Your scheduling alternatives leveraging only the sql endpoint are:
consumer.delay
consumer.initialDelay
consumer.useFixedDelay
maxMessagesPerPoll
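If you go the quartz2 route, the equivalent endpoint looks roughly like this (Spring DSL sketch; the timer name is a placeholder, and note that Quartz cron takes six fields, seconds first):

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <!-- Quartz cron: sec min hour day-of-month month day-of-week -->
    <from uri="quartz2://veloxityTimer?cron=0+6+8+*+*+?"/>
    <!-- run the query as a producer instead of an sql consumer -->
    <to uri="sql:select * from dual?dataSource=#veloxityDS"/>
  </route>
</camelContext>
```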
If I use the OnError event handler in my SSIS package, there are variables System::ErrorCode and System::ErrorDescription from which I can get the error information if anything fails during execution.
But I can't find the same for the OnTaskFailed event handler. How do I get the ErrorCode and ErrorDescription from within the OnTaskFailed event handler when something fails during execution, in case we want to implement only the OnTaskFailed event handler for our package?
This might be helpful: it's a list of all system variables and when they are available.
http://msdn.microsoft.com/en-us/library/ms141788.aspx
I've just run into the same issue, and I've worked around it by:
Creating an #[ErrorCache] variable
In my case the task was being retried multiple times, so I needed an Expression Task to reset the #[ErrorCache] variable at the beginning of each retry
Creating an OnError event handler, which contains an Expression Task purely to append the #[ErrorMessage] to the #[ErrorCache]
Creating an OnTaskFailed event handler which then utilises the #[ErrorCache]
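In SSIS expression syntax (variables are written @[User::…]; the variable name follows the steps above, and OnError exposes the message as System::ErrorDescription), the two Expression Tasks would look roughly like:

```
/* Expression Task at the start of each retry: reset the cache */
@[User::ErrorCache] = ""

/* Expression Task inside the OnError handler: append the current message */
@[User::ErrorCache] = @[User::ErrorCache] + @[System::ErrorDescription] + "\n"
```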
Go to the event handlers of the task you want to monitor for errors and click the link to create a new OnError handler. Then create a task like Send Mail and create two variables: mail_subject and mail_body.
IMPORTANT: Move the variables from the current scope to the OnError scope, otherwise the values won't be available when the package is processed.
Define the mail_subject variable as a string and set its expression to: "Error " + #[System::TaskName] + " when executing " + #[System::PackageName] + " package."
Define the mail_body variable as a string and set its expression to: REPLACENULL( #[System::ErrorDescription],"" ) + "\nNotify your system administrator."
In the task editor, create an expression assigning Subject to the mail_subject variable. Set MessageSourceType to Variable and MessageSource to mail_body.
In the task that you put on the error event handler you can select parameters that are only available in an error handler, such as System::ErrorDescription or System::SourceName (which provides the task that failed). We use these as input variables to a stored proc that inserts into an error table (and sends an email for a failed process), which stores information beyond what the logging table holds. We also use the logging table to log our steps, and include OnError in that, so general error information goes there.