Facing errors during SQL deployment in Azure DevOps pipeline - sql

I am using the SQL DACPAC type of deployment in an Azure DevOps release pipeline, but I am getting the error below. I have no idea about SQL. Any suggestions?
Publishing to database 'database_name' on server 'Server_name'.
Initializing deployment (Start)
*** The column [dbo].[xxxx].[yyyy] is being dropped, data loss could
occur.
Initializing deployment (Complete)
Analyzing deployment plan (Start)
*** If this deployment is executed, changes to [dbo].[xxx2] might
introduce run-time errors in [dbo].[yyyy2].
Analyzing deployment plan (Complete)
Updating database (Start)
An error occurred while the batch was being executed.
Updating database (Failed)

I agree with Michael's point.
The column [***] is being dropped, data loss could occur.
and
If this deployment is executed, changes to [] might introduce
run-time errors in [].
These messages are expected; they are safety checks. I assume you made some changes to your database that you cannot be sure won't break anything on the target database. The deployment is blocked because the server can't determine whether the changes are safe.
The first solution is to set /p:BlockOnPossibleDataLoss=false.
BlockOnPossibleDataLoss defaults to true, which stops the deployment when possible data loss is detected; setting it to false makes SqlPackage.exe ignore those warnings.
So, go to the task and enter the above argument in the Additional SqlPackage.exe Arguments field:
The second solution is to add
/p:TreatVerificationErrorsAsWarnings=true
Note: The second solution should be used if the first one does not work for you.
Setting TreatVerificationErrorsAsWarnings=true treats verification errors as warnings so that you get a complete list of issues, instead of having the publish action stop at the first error.
See this doc for more details on the publish action properties.
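For reference, if you were running the publish action yourself from the command line, both properties together would look something like the following; the DACPAC file, server, and database names here are placeholders, not values from the question:

SqlPackage.exe /Action:Publish /SourceFile:MyDatabase.dacpac /TargetServerName:Server_name /TargetDatabaseName:database_name /p:BlockOnPossibleDataLoss=false /p:TreatVerificationErrorsAsWarnings=true

In the Azure DevOps task itself, only the /p: properties go into the Additional SqlPackage.exe Arguments field.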

Related

Capture Data State on Error at runtime - SQL Server

I have a question regarding error handling practices within SQL Server.
What I would like to accomplish is easy error re-creation. I have a very active SQL Server installation with constantly changing data in the tables I am interested in. It is modeling an active warehouse environment.
I've already built a generic error handler for all the stored procedures on this installation in order to track errors and log specifics about the cause of the error (a minimal sketch follows the list below), such as:
calling line (this gives the EXEC statement of the stored procedure as well as input variables)
error_message
error_state
error_number
error_line
etc.
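To give a concrete picture, a handler of this kind typically follows the T-SQL TRY...CATCH pattern. This is only an illustrative sketch; the procedure name, parameter, and log table are made up:

BEGIN TRY
    -- hypothetical call; its text is also written to the log as the "calling line"
    EXEC dbo.usp_MoveInventory @PalletId = 42;
END TRY
BEGIN CATCH
    INSERT INTO dbo.ErrorLog (CallingLine, ErrorMessage, ErrorState, ErrorNumber, ErrorLine)
    VALUES ('EXEC dbo.usp_MoveInventory @PalletId = 42',
            ERROR_MESSAGE(), ERROR_STATE(), ERROR_NUMBER(), ERROR_LINE());
    THROW;  -- re-raise so callers still see the failure
END CATCH;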
What I am missing is reproducibility. Even if I were to run the same statement just a few minutes after being notified that an error occurred, I cannot be sure that my results would be the same due to the underlying data changing.
I would like to capture the state of the data on the database when the error occurred.
This could be something like a database image that I could then import into a clean SQL Server installation and execute the erring line in order to perfectly capture what was happening on the database the moment the error occurred.
Due to the nature of needing to capture this at runtime, I would prefer a light-weight solution. Perhaps only capturing the tables relevant to the failing statement.
Does anyone know if this is possible or has been done before? It is really only critical to try and suss out logical errors. It wouldn't be necessary for something like a deadlock.
I would ultimately turn these data subsets into XML or JSON and include them in the error log when appropriate.
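One way this could look, as a sketch only: on SQL Server 2016+ the CATCH block can serialize rows with FOR JSON, assuming the handler knows which table and key the failing statement touched. The table, column, and key below are hypothetical:

-- snapshot only the rows relevant to the failing statement
DECLARE @snapshot nvarchar(max) =
    (SELECT * FROM dbo.Inventory WHERE PalletId = 42 FOR JSON AUTO);

-- attach the captured rows to the error-log entry the generic handler just wrote
UPDATE dbo.ErrorLog
SET DataState = @snapshot
WHERE ErrorLogId = SCOPE_IDENTITY();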

SCCM - Task sequence with application failure

I have been having some issues lately when including applications in task sequences. Any task sequence that contains applications automatically fails with "The referenced package cannot be found." I've checked my distribution points and boundary groups and verified that the application content is distributed. The logs just state that it failed to find an application; I track down the application being referenced and redistribute it, or even remove it from the task sequence, but when I run the sequence again I get the same error for the next application's content IDs. Task sequences that only add packages seem to run successfully. Has anyone else encountered this?
EDIT: I've also been seeing 'content hash value mismatch' errors in the logs.
Any help is greatly appreciated. Some extra info:
I have already restored the site server VM and rebuilt the distribution point.
Failed to find CCM_ApplicationCIAssignment object for AdvertID="***2017A", ModelName="ScopeId_E6E2F6FB-692F-4938-8AC6-146644EAE93F/Application_ce95b2ac-bf5a-4de2-b930-6f9b74b7dfd0"
"Failed to resolve selected task sequence dependencies. Code(0x80040104)"

How to define an optional change set in Liquibase?

We use Liquibase as a database refactoring tool in a cloud service, and would now like to employ it for some lightweight data migration, which would be realized as a CustomTaskChange and would take just a few seconds. This data migration is 'nice to have' but by no means mandatory for the service to function properly - if it fails for some reason, the change set should just be skipped, the service started nevertheless, and the change set retried during the next restart of the service until it finally succeeds. So, errors when executing the change set should be ignored, but the set should be marked as ran only after it has actually run successfully once.
We wonder how we could implement this kind of behavior using Liquibase: the <changeSet> attribute failOnError="false" continues in case of an error, but according to the documentation and an answer given by Nathan Voxland here at StackOverflow it always marks the change set as ran - hence Liquibase wouldn't retry it during the next startup of the service. The <preConditions> attribute onFail seems to be concerned only with failing preconditions, so an error in the change set itself would still fail the startup even with onFail set to, say, CONTINUE.
Is there any other option / attribute that we overlooked or a recommended fashion to solve this kind of situation?
You may be able to achieve the "retry until successful" behaviour if you implement the optional data migration inside the code of a custom precondition. Then, you could configure onFail of that precondition to CONTINUE which will give you the behaviour you want (source):
CONTINUE – Skip over the change set. Execution of the change set will be attempted again on the next update. Continue with the change log.
I'm not entirely sure if implementing the migration in the precondition code is technically possible – because it certainly wasn't meant for such things. And you also may want to verify that the custom precondition is not executed again once the patch set has been marked as ran.
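A sketch of what that change set could look like; the precondition class name is hypothetical, and the change set body is deliberately a no-op:

<changeSet id="optional-data-migration" author="service">
    <preConditions onFail="CONTINUE">
        <!-- hypothetical class that performs the migration and reports failure
             until it has succeeded once, so the set is retried on each startup -->
        <customPrecondition className="com.example.DataMigrationPrecondition"/>
    </preConditions>
    <!-- no-op change: the set is marked as ran only once the precondition passes -->
    <empty/>
</changeSet>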

ResourceNotFoundException with full deploy to prod

I have a fully developed set of functions which work fine in the "dev" stage, and it's now time for me to deploy to production. Unfortunately, every time I try to deploy, it runs for a long time, and after printing "Checking Stack update progress" it fails with a 404 error:
An error occurred: SentinelLambdaFunction - Function not found: arn:aws:lambda:us-east-1:837955377040:function:xyz-services-prod-sentinel (Service: AWSLambda; Status Code: 404; Error Code: ResourceNotFoundException; Request ID: 38f86b7a-99cd-11e8-af06-fffd92e40dc5).
This error is nonsensical to me, as this function does exist, and executing precisely the same full deployment to "dev" results in no error. Note that in both environments/stages we are deploying 10 functions in a full deployment.
I tried removing the function which was being complained about first, with the hope that I could re-include it on a second deployment but then it simply complained about a different function not existing.
I also thought maybe the "--force" parameter might push this deployment into place but it has had no impact on the error I get.
The cycle time for each attempt is very long so I'd be very grateful if anyone could help to point me in the right direction on this.
Below is a screenshot of the output when run in "verbose" mode:
In attempt to get around the error I thought maybe I'd have a better chance if I went into CloudFormation and explicitly deleted the template for prod. I attempted to do this from the GUI and got the following:
This actually has further convinced me that this removal is important but I'm not sure what to do next.
For me, the solution was:
serverless remove
and then try deploying again.
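Assuming the standard Serverless CLI and a stage named prod, that is:

serverless remove --stage prod
serverless deploy --stage prod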
So the solution to this problem was to ensure that all previous traces of the CloudFormation stack were removed. In my case I had manually taken a few functions out of Lambda, and the 404 errors I was getting were likely occurring during the removal attempts rather than, as I had assumed, being related to adding these functions.
Bear in mind that you may find yourself, as I did, in a situation where the first attempt to delete fails. In that case, try again and make sure to check off any checkboxes exposed by the UI that indicate what caused the issues on the prior attempt.
Once I'd done that I was able to deploy as per normal from the serverless framework.

SQLCODE=-514 SQLSTATE=26501 occurred after I finished the rebind operation

I want to make sure the new procedure is valid. Instead of letting DB2 always answer queries from the cache pool, I have to rebind the database (db2rbind command). Then I deploy the application on WebSphere. But when I log in to the application, this error occurs:
The cursor "SQL_CURSN200C4" is not in a prepared state..SQLCODE=-514 SQLSTATE=26501,DRIVER=3.65.97
Furthermore, the weirdest thing is that the error occurred only once. It never occurs again after that, and the application runs very well. I'm curious about how it occurs and why it occurred only once.
PS: my DB2 version is 10.1 Enterprise Server Edition.
The SQL that the error stack points to is very simple, something like:
select * from table where 1=1 and field_name='123' with ur
Unless you configure otherwise (statementCacheSize=0) or manually use setPoolable(false) in your application, WebSphere Application Server data sources cache and reuse PreparedStatements. A rebind can cause statements in the cache to become invalid. Fortunately, WebSphere Application Server has built-in knowledge of the -514 error code and will purge the bad statement from the cache in response to an occurrence of this error, so that the invalidated prepared statement does not continue to be reused and cause additional errors to the application. You might be running into this situation, which could explain how the error occurs just once after the rebind.
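For illustration only (this snippet is not from the question, and it assumes connection is an already-open java.sql.Connection): opting a single statement out of the cache in application code would look like this, with the query text adapted loosely from the question:

PreparedStatement ps = connection.prepareStatement(
    "select * from my_table where 1=1 and field_name = ? with ur");
ps.setPoolable(false);  // hint to the pool not to cache/reuse this statement
ps.setString(1, "123");
ResultSet rs = ps.executeQuery();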