I have a fully developed set of functions which work fine in the "dev" stage, and it's now time for me to deploy to production. Unfortunately, every time I try to deploy it runs for a long time, but after printing "Checking Stack update progress" it fails with a 404 error:
An error occurred: SentinelLambdaFunction - Function not found: arn:aws:lambda:us-east-1:837955377040:function:xyz-services-prod-sentinel (Service: AWSLambda; Status Code: 404; Error Code: ResourceNotFoundException; Request ID: 38f86b7a-99cd-11e8-af06-fffd92e40dc5).
This error makes no sense to me, as this function does exist, and executing precisely the same full deployment to "dev" results in no error. Note that in both environments/stages we are deploying 10 functions in a full deployment.
I tried removing the function being complained about, hoping I could re-include it in a second deployment, but then it simply complained about a different function not existing.
I also thought the "--force" parameter might push this deployment into place, but it has had no impact on the error I get.
The cycle time for each attempt is very long so I'd be very grateful if anyone could help to point me in the right direction on this.
Below is a screenshot of the output when run in "verbose" mode:
In an attempt to get around the error, I thought I might have a better chance if I went into CloudFormation and explicitly deleted the prod stack. I attempted to do this from the GUI and got the following:
This has actually further convinced me that this removal is important, but I'm not sure what to do next.
For me, the solution was:
serverless remove
and then try deploying again.
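For reference, a minimal sketch of the full cycle on the production stage (assuming the stage is literally named "prod" and using the region from the error message; adjust both to your setup):
serverless remove --stage prod --region us-east-1
serverless deploy --stage prod --region us-east-1 --verbose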
So the solution to this problem was to ensure that all previous traces of the CloudFormation stack were removed. In my case I had manually taken a few functions out of Lambda, and the 404 errors I was getting were most likely occurring during those removal attempts rather than, as I had assumed, when adding the functions.
Bear in mind you may find, as I did, that the first attempt to delete fails. In that case try again, and make sure to tick any checkboxes the UI exposes indicating what caused the failure on the previous attempt.
Once I'd done that I was able to deploy as normal from the Serverless Framework.
I suppose that this is more of a curiosity as opposed to an actual issue, but I thought I'd ask about it anyway. There are times when an uncaught error occurs in a server-side NetSuite script using SuiteScript 2.0/2.1 (2.x), but instead of seeing a "SYSTEM" scripting log entry, there's nothing. It gives the appearance of a script just stopping for no reason. Now, I know this can easily be avoided by wrapping everything within a try-catch block, but that's not what I'm trying to discuss here.
Does anyone have any insight into why a script would just stop without any SYSTEM error logging? It's just something I find interesting, given that with the 1.0 API uncaught errors would always get logged. It's not that an uncaught error is never logged as a SYSTEM entry, either; the silent case seems more common with map/reduce scripts, but unless my memory fails me I believe I have seen it happen with Suitelets and user event scripts too.
Just thought that I'd pose the question here to see if there was anyone who might know a little something about it.
This is actually covered in the system help for Map/Reduce scripts. They do fail silently. I've not seen this in any other script type.
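Since the question already mentions the try/catch workaround, here is a minimal sketch (not from NetSuite's help; the input data and doWork are hypothetical) of how a map/reduce script can surface otherwise-silent failures, both by catching inside the stage and by reading mapSummary.errors in summarize:
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define(['N/log'], (log) => {
    const doWork = (value) => { /* hypothetical per-record work */ };

    const getInputData = () => [1, 2, 3]; // hypothetical input

    const map = (context) => {
        try {
            doWork(context.value);
        } catch (e) {
            // without this catch the failure may never appear as a SYSTEM log entry
            log.error({ title: 'map failed, key ' + context.key, details: e });
            throw e; // rethrow so the framework also records it in mapSummary.errors
        }
    };

    const summarize = (summary) => {
        // errors raised during the map stage are still reachable here
        summary.mapSummary.errors.iterator().each((key, err) => {
            log.error({ title: 'uncaught map error, key ' + key, details: err });
            return true; // keep iterating
        });
    };

    return { getInputData, map, summarize };
});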
Apologies first of all if there is an answer to this elsewhere on the site. I've checked some of the proposed solutions and can't find anything appropriate.
So I've got this SSRS report that works fine when deployed but won't run locally during testing. The main query itself works when run in the query editor, as do all the sub-queries that provide data for the parameter drop-down lists, but when I try to preview the report I get the error.
Bear in mind it used to work, up until the end of last year, which was when it was last updated.
I've tried removing all the tables and matrices on a copy (replacing them with one very simple table); the parameters went too, and I still get the error. I've also downloaded the server version, renamed it and redeployed it: it works online, but not locally. As the error message is brutally vague, I've run out of ideas of things to try. Apart from switching over to Power BI, can anyone think of anything else I could do to work out where the error is coming from?
Possibly relevant: the main query has some recursion in a subquery, but only a couple of levels. Could this be related? As I said, it used to work...
PS: I'm using VS 16.7.2 against server version 13.0.4466.4.
PPS: I also added the query to a brand-new report and it errored, so I think it must be something related to the SQL itself?
I am using a SQL DACPAC type of deployment in a release pipeline in Azure DevOps, but I am getting the error below. I have no idea about SQL. Any suggestions?
Publishing to database 'database_name' on server 'Server_name'.
Initializing deployment (Start)
*** The column [dbo].[xxxx].[yyyy] is being dropped, data loss could
occur.
Initializing deployment (Complete)
Analyzing deployment plan (Start)
*** If this deployment is executed, changes to [dbo].[xxx2] might
introduce run-time errors in [dbo].[yyyy2].
Analyzing deployment plan (Complete)
Updating database (Start)
An error occurred while the batch was being executed.
Updating database (Failed)
I agree with Michael's comment.
The column [***] is being dropped, data loss could occur.
and
If this deployment is executed, changes to [] might introduce
run-time errors in [].
These messages are expected; they are safety checks. I assume you made some changes to your database, and the tooling cannot be sure whether those changes would break anything in the target database, so it blocks the deployment because the server can't determine whether the changes are safe.
The first solution is to set /p:BlockOnPossibleDataLoss=false.
BlockOnPossibleDataLoss defaults to true, which means the deployment stops if possible data loss is detected; setting it to false lets SqlPackage.exe proceed despite such warnings.
So please go to the task and enter the above argument in the Additional SqlPackage.exe Arguments field:
The second solution is to enter
/p:TreatVerificationErrorsAsWarnings=true
Note: The second solution should be used if the first one does not work for you.
Setting TreatVerificationErrorsAsWarnings=true treats verification errors as warnings so that you get a complete list of issues, rather than the publish action stopping at the first error.
See this doc for more publish action properties.
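For orientation, the task is essentially passing these properties through to SqlPackage.exe. A roughly equivalent command line looks like the sketch below (the .dacpac file name is an assumption, and normally you would add only the property you actually need):
SqlPackage.exe /Action:Publish /SourceFile:"YourDatabase.dacpac" /TargetServerName:"Server_name" /TargetDatabaseName:"database_name" /p:BlockOnPossibleDataLoss=false /p:TreatVerificationErrorsAsWarnings=true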
I've got a graph that isn't behaving as it should in CloudConnect.
I'm running it locally, and it's completing, but not doing its work.
In an effort to figure out why this is, I've added printLog calls in many places, like the following
printLog(warn, 'transform from file ' + $in.0.fileName);
printLog(debug, 'joining etc');
The Phase consists of a FileList into a SimpleCopy, into a LookupJoin, a Reformat (produce SQL) and a DBInsert.
However, while I see logs for the phases above, I'm not seeing anything in the log for any part of my phase, even though all parts of the phase report running successfully in the log. I've also done Enable Debugging on all connections in this phase.
Am I missing something to enable logging? Is there a better way to debug processing in CloudConnect?
Discovered the problem: the FileList will succeed even if the source file cannot be found, but none of the subsequent steps will then fire. It's somewhat unintuitive, since the log file says 'succeeded'.
For debugging, after a run you can access the data by right-clicking on the connection and selecting "View Data".
Sorry for the elementary question, but documentation didn't seem to cover this clearly, at least for a GoodData noob. I'll leave it up for anyone with the same problem!
I have written a cron job to run every 30 minutes in the scheduled-action-services-context.xml file.
However, I see that it is not working; when I check the log I can find only this error.
My cron job also uses a Lucene search, so I believe this error relates to that. Kindly help me fix it. Here is the error:
ERROR [quartz.core.JobRunShell] [DefaultScheduler_Worker-8] Job jobGroup.jobD threw an unhandled Exception:
org.alfresco.repo.search.impl.lucene.LuceneQueryParserException: 03020086
The error log you show is most likely the reason your scheduled action is not working properly. In fact, it seems that the action is properly scheduled, but it then fails to complete because you provided an invalid Lucene query. Without the query itself or any other detail, such as the relevant Spring config or the action implementation, I can only tell you to:
double check the Lucene query (see the example below)
verify that the error log appears precisely when you would expect your action to run
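As an illustration only (the file name is made up), a well-formed Alfresco Lucene query quotes its values and escapes the colon in prefixed property names, for example:
TYPE:"cm:content" AND @cm\:name:"report.pdf"
If you are unsure about your query, you can usually paste it into the Node Browser in the admin tools with the Lucene language selected and check that it parses and returns results before running it from the scheduled action.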