Unexpected response code for operation : 99 in Azure table storage error - azure-storage

When I deployed the project to the staging environment the values were stored correctly in Azure Table Storage, but after I swapped the deployment to the production environment in Azure this issue started appearing. I don't know what the exact problem is; can anyone help me?
Thanks in advance

This error message indicates that the 100th operation in a batch transaction (aka Entity Group Transaction) failed and therefore the entire batch transaction failed. If you enable Storage Analytics logging, each individual operation will have a separate log entry, so you can easily identify the issue.
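To illustrate where that operation index comes from, here is a minimal sketch using the Python azure-data-tables client; the connection string, table name, and entity fields are placeholders, and the chunking to 100 operations per batch reflects the documented limit on entity group transactions.

```python
# Minimal sketch: submit upserts in batches of at most 100 operations that
# all share the same PartitionKey (both are requirements for an entity
# group transaction). Connection string, table and entities are placeholders.
from azure.data.tables import TableClient, TableTransactionError

client = TableClient.from_connection_string(
    "<storage-connection-string>", table_name="MyTable"
)

entities = [
    {"PartitionKey": "site1", "RowKey": str(i), "Value": i} for i in range(250)
]

BATCH_SIZE = 100  # an entity group transaction accepts at most 100 operations

for start in range(0, len(entities), BATCH_SIZE):
    operations = [("upsert", e) for e in entities[start:start + BATCH_SIZE]]
    try:
        client.submit_transaction(operations)
    except TableTransactionError as exc:
        # The whole batch is rolled back if any single operation fails; the
        # "operation : 99" in the error refers to the 100th operation of the
        # failed batch.
        print(f"Batch starting at entity {start} failed: {exc}")
        raise
```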

Related

Avoid Deadlock During SSRS Reports Deployment

I wonder if anyone has any suggestions or experience with the same scenario.
We have one server we utilise for our SSRS reports. We deploy to multiple folders in SSRS, i.e. Site_1, Site_2, Site_3 ... Site_26.
In each site we deploy roughly 800+ reports. These reports are the same for Site_1 to Site_26 (except if we skip a site).
We use Azure DevOps with the PowerShell ReportingServicesTools module to deploy the reports.
What happens is that when we start the deployment, several sites fail due to a deadlock with the below error:
The report name and process ID are random and never the same:
##[error]Failed to create item Report.rdl : Failed to create catalog item C:\azagent\A9_work\r5\a\SSRS Reports\Reports\Report.rdl : Exception calling "CreateCatalogItem" with "7" argument(s): "System.Web.Services.Protocols.SoapException: An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database. ---> Microsoft.ReportingServices.Diagnostics.Utilities.ReportServerStorageException: An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database. ---> System.Data.SqlClient.SqlException: Transaction (Process ID 100) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
The error is not related to low disk space etc., as we've tested this to death and it occurs with just two sites on a monster server. The error is a transaction deadlock.
The only way we can successfully deploy the reports is if we deploy the sites consecutively, one after the other. However, due to time constraints and business requirements this is not an option.
We have done all the PSSDiag captures etc. and found that the error occurs in the stored procedure "FindObjectsNonRecursive".
We nearly resolved it by adding the (NOLOCK) hint, but it seems this was only temporary and we're back to where we were. Microsoft also advised that they would not change it, and 18 months down the line MS still has not been able to give us a fix or a solution to our problem.
I would appreciate any feedback from anyone on how you overcame this problem if you had it.
Thank you for your time.
Did you try retrying, as the error message suggests? Deadlocks are timing-dependent, so a retried deployment should eventually succeed.
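A sketch of that retry idea in Python (the actual deployment here is PowerShell/ReportingServicesTools, so this only illustrates the pattern; deploy_report is a hypothetical stand-in for whatever call uploads a single .rdl file):

```python
import random
import time

def deploy_with_retry(deploy_report, rdl_path, max_attempts=5):
    """Retry a single report deployment when it fails with a deadlock.

    deploy_report is a hypothetical callable standing in for whatever
    actually uploads one .rdl file to the report server.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            deploy_report(rdl_path)
            return
        except Exception as exc:
            # Only retry deadlock victims; re-raise anything else.
            if "deadlock" not in str(exc).lower() or attempt == max_attempts:
                raise
            # Exponential backoff with jitter so parallel site deployments
            # do not retry in lockstep and immediately deadlock again.
            delay = 2 ** attempt + random.uniform(0, 1)
            print(f"Deadlock on {rdl_path}, retry {attempt}/{max_attempts} "
                  f"in {delay:.1f}s")
            time.sleep(delay)
```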

BigQuery Data Transfer does not delete sources

I am using an on-demand BigQuery Data Transfer job (as a test before automation) which loads data from Cloud Storage into a table.
All is working fine, however I set "Delete source files after transfer" to true and at the end no file is deleted. The files are not loaded again, but they are still sitting in my Storage folder.
This deletion is vital since the amount of data could become quite big in a short period of time. I could delete the files with another program, but then the Transfer Service would become less interesting.
The job itself does not throw any error, which means that something is failing silently. Do you know what could possibly cause this? Or maybe I am missing the meaning of this option?
Thanks
Make sure you have enough permissions to run the Cloud Storage transfer; it won't tell you explicitly which permissions are missing.
Required permissions:
BigQuery: bigquery.transfers.update
Cloud Storage: storage.objects.get, storage.objects.list, storage.objects.delete
For more info, refer here.
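If you want to check this from code, the Cloud Storage testIamPermissions call reports which of those permissions the caller actually holds; here is a minimal sketch with the Python google-cloud-storage client (the bucket name is a placeholder, and it must be run with the same credentials the transfer uses):

```python
# Sketch: verify the Cloud Storage permissions listed above are granted on
# the source bucket. The bucket name is a placeholder; run this with the
# credentials the transfer actually uses.
from google.cloud import storage

REQUIRED = [
    "storage.objects.get",
    "storage.objects.list",
    "storage.objects.delete",  # needed for "Delete source files after transfer"
]

bucket = storage.Client().bucket("my-source-bucket")
granted = set(bucket.test_iam_permissions(REQUIRED))
missing = sorted(set(REQUIRED) - granted)

if missing:
    print("Missing permissions on the bucket:", missing)
else:
    print("All required Cloud Storage permissions are present.")
```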

Clean up and prevent excessive data accumulation in an MobileFirst Analytics 8.0 environment

Our analytics data is taking up almost 100% of the disk space on the file system. How do we remove the older data, and prevent such a situation from occurring again?
You can follow the URL https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/installation-configuration/production/server-configuration/#setting-up-jndi-properties-for-mobilefirst-server-web-applications to set up JNDI properties in MobileFirst. You need to set the TTL values based on your business requirements, and keep the values as short as possible, so that huge data accumulation does not occur again. To clean up the existing data, you can perform the following:
Set up the Analytics server with the JNDI properties set for TTL and other configuration (an example server.xml snippet follows these steps)
Stop the Analytics Server
Delete the contents of the /analyticsData directory to discard the existing data, so that no directories remain within the analyticsData directory. Note:
/analyticsData is the default location, please refer
http://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/installation-configuration/production/analytics/configuration/ to verify the actual value in your environment.
Restart the Analytics server. (The index will now be created brand new with the TTL in effect, so the proper data purging is in place.)
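For the JNDI step above, on WebSphere Liberty the properties are normally set as jndiEntry elements in server.xml. A minimal sketch follows; the property name analytics/TTL_AppSession and the value format are only assumed examples, so check the linked configuration page for the exact property names and units supported by your version.

```xml
<!-- Sketch only: set an analytics TTL via a Liberty JNDI entry.
     "analytics/TTL_AppSession" and the value format are assumed examples;
     verify the exact property names and units in the MobileFirst 8.0 docs. -->
<server>
    <jndiEntry jndiName="analytics/TTL_AppSession" value="90" />
</server>
```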

Time-out occurred while waiting for buffer latch type 3 while processing MOLAP cube

This is the error I get from the Log while trying to process a SQL Server 2012 MOLAP Cube.
"Time-out occurred while waiting for buffer latch type 3 for page (1:2044928) database ID 2.; 42000." Source="Microsoft SQL Server 2012 Analysis Services" HelpFile="Error ErrorCode="3240034318" Description="Errors in the OLAP storage engine: An error occurred while processing the 'Measurement' partition of the measure group for the 'PE cube' cube from the Cube database."
I have scripted the processing task in XMLA and execute the processing via an SSAS command step in an Agent job (a sketch of such a command is shown below).
The first step is a Process Update of all dimensions and this succeeds, but when I run Process Data on the cube the load fails and this error pops up.
I first tried processing with an SSIS package, but this caused the whole server to crash instead of just the job failing. This leads me to believe it is a performance issue, but the machine running the job is an Azure VM with 16 processors and 112 GB RAM, so I don't know where to look. I also tried running the job without any other activities on the server, but that did not help.
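For reference, a Process Data command scripted in XMLA looks roughly like the sketch below; the database ID is a placeholder and the real script in this setup may differ.

```xml
<!-- Sketch of an XMLA Process Data command; DatabaseID/CubeID are placeholders. -->
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Object>
      <DatabaseID>MyOlapDatabase</DatabaseID>
      <CubeID>PE cube</CubeID>
    </Object>
    <Type>ProcessData</Type>
  </Process>
</Batch>
```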
The disk containing the SSAS Instance still has 500GB Free.
The measure group is querying a table containing 180 million records.
While processing the cube on a dev server with way less data there are no issues. I once succeeded in running Process Full on the whole cube directly within SSAS, but via DTEXEC, SSISDB or SSDT the processing results in a server crash.
Earlier I got different time-out errors, but after adjusting the SSAS ExternalCommandTimeOut, ExternalConnectionTimeOut and ForceCommitTimeout properties to 0 this did not occur anymore.
I have tried multiple processing settings, but because I think it is a performance issue I tried to make the processing as light on performance as possible.
Processing Settings:
Object: Cube; Option: Process Data;
Processing Order: Sequential with Separate Transactions.
Writeback Table Option: Use Existing;
Do not process affected objects.
Update:
I have processed the measure group which triggered the error on its own; this did not finish, and in the Activity Monitor I saw a lot of IO_Completion and CXPacket wait types. When querying sys.dm_exec_requests I see a SELECT with wait_type IO_Completion which has already been running for a long time with a lot of reads (a query sketch is shown below).
Last night I tried to process all measure groups excluding the one which triggered the error earlier, but unfortunately the whole server crashed again...
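For reference, here is a small sketch of the kind of sys.dm_exec_requests check mentioned above, using pyodbc against the relational engine; the server and database names are placeholders, and the 60-second threshold is arbitrary.

```python
# Sketch: list long-running requests with their wait types and read counts,
# similar to the sys.dm_exec_requests check described above.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=master;Trusted_Connection=yes;"
)

sql = """
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.wait_time,
       r.total_elapsed_time,
       r.logical_reads,
       r.reads,
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.total_elapsed_time > 60000   -- running longer than 60 seconds
ORDER BY r.total_elapsed_time DESC;
"""

for row in conn.cursor().execute(sql):
    print(row.session_id, row.wait_type, row.total_elapsed_time,
          row.reads, row.command)
```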
Update2:
We have looked into upgrading to premium storage, but this means we have to switch from A11 to a DS or GS series VM. That means we need to resize the whole VM, which hosts live solutions, resulting in downtime and the effort of restoring the VHDs and replacing the current OS disk, which contains parts of live solutions.
Another option we identified is applying partitioning or improving the underlying queries behind the measures. Unfortunately that is way more effort than anticipated; a quick workaround for now would help a lot in selling a long-term solution improvement.
Update3:
We have had contact with Microsoft and they advise migrating from an A11 VM to a D14 v2 and upgrading to premium storage disks. This will be our next step and will be executed this coming Friday. After the migration I will update or close this post.
If any information is missing, please let me know. Any suggestions that would help me pinpoint the problem would be much appreciated!
The upgrade to a VM better suited to the situation (DS14 v2) and the upgrade to P30 premium storage disks have resolved the issues. The problem was not in the way the cube was being processed or configured, but in the hardware used.

Change Dimension Storage Type as Real-time rolap but error occurs

I created a measure group and two dimensions using [AdventureWorksDW2012], and I am trying to change one dimension's storage mode by setting proactive caching to Real-Time ROLAP. There is no warning message when deploying and processing, but an error occurs when I query in SQL Server Analysis Services; see below for the error message and the screen capture.
Error occurred retrieving child nodes: the current operation was cancelled because another operation in the transaction failed.
Does somebody have a hint?
Regards