CRM 2013 SP1 - Duplicate call to MessageProcessor start processing message:'Retrieve' for entity:'account' - dynamics-crm-2013

I have a CRM 2013 SP1 on-premise setup.
This is my scenario: I want to log every form visit for the Account entity, so I created a plugin that hooks onto the Retrieve call of the Account entity.
That's where the problem starts: I am getting a duplicate entry for every single form view.
At first I thought I had an error in my plugin, but it's so basic that it doesn't issue any extra Retrieve, so that's not the case. It's also not a problem with the depth of the context, as you can see from the example trace below.
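To show how basic it is, here is a minimal sketch of that kind of Retrieve plugin (illustrative only, not my exact code; the tracing call stands in for whatever the real plugin logs):
using System;
using Microsoft.Xrm.Sdk;

public class AccountRetrieveLogger : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        // Registered on the Retrieve message of 'account' (post-operation).
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));

        if (context.MessageName != "Retrieve" || context.PrimaryEntityName != "account")
            return;

        // Log the visit; the real plugin would write to a custom logging entity instead.
        tracing.Trace("Account {0} retrieved by user {1} at depth {2}", context.PrimaryEntityId, context.UserId, context.Depth);
    }
}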
I did a trace on the CRM server and I can see the two Retrieve calls in the trace log, both seem "legit" calls.
What I have done so far to debug:
Looked at the IIS access log and checked for multiple form hits, that's not the case.
Disabled the plugin and made sure no other "external" plugins are hooked into the Account entity.
Stripped the Account form: removed the social pane and all other fields from the form except the mandatory 'name' field.
Started CRM tracing on a different organization on the same CRM server and saw the same behavior, that is, the Retrieve request being made twice for a single form-open action. That org was "clean", so to speak; it had not been modified.
Example output from the trace log (not complete), which shows the timestamps:
[2014-08-15 16:22:11.2] Process: w3wp |Organization:317fc566-698a-e311-93ec-00155d030401 |Thread: 35 |Category: Platform |User: 0574cc0c-364b-4347-93c8-9411e8291c01 |Level: Info | ReqId: 1c905b2e-1e40-4ca3-b743-0ae7ef7b313e | MessageProcessor.Execute ilOffset = 0x3D
>MessageProcessor start processing message:'Retrieve' for entity:'account' correlationId:17242856-5c45-484c-b79a-0d102988390a depth:1 last updated at: 08/15/2014 16:22:11.
[2014-08-15 16:22:11.2] Process: w3wp |Organization:317fc566-698a-e311-93ec-00155d030401 |Thread: 35 |Category: Platform |User: 0574cc0c-364b-4347-93c8-9411e8291c01 |Level: Info | ReqId: 1c905b2e-1e40-4ca3-b743-0ae7ef7b313e | MessageProcessor.Execute ilOffset = 0x2DC
>MessageProcessor finish processing message 'Retrieve' for 'account'.
[2014-08-15 16:22:12.8] Process: w3wp |Organization:317fc566-698a-e311-93ec-00155d030401 |Thread: 35 |Category: Platform |User: 0574cc0c-364b-4347-93c8-9411e8291c01 |Level: Info | ReqId: 1c905b2e-1e40-4ca3-b743-0ae7ef7b313e | MessageProcessor.Execute ilOffset = 0x3D
>MessageProcessor start processing message:'Retrieve' for entity:'account' correlationId:cfe4c47e-31f4-4dfd-8fc9-7ed26187d4b4 depth:1 last updated at: 08/15/2014 16:22:12.
[2014-08-15 16:22:12.9] Process: w3wp |Organization:317fc566-698a-e311-93ec-00155d030401 |Thread: 35 |Category: Platform |User: 0574cc0c-364b-4347-93c8-9411e8291c01 |Level: Info | ReqId: 1c905b2e-1e40-4ca3-b743-0ae7ef7b313e | MessageProcessor.Execute ilOffset = 0x2DC
>MessageProcessor finish processing message 'Retrieve' for 'account'.
I'm kind of out of ideas, and that's where you come in. Any ideas?

Related

How to change the Splunk Forwarder Formatting of my Logs?

I am using our enterprise's Splunk forwarder, which seems to be logging events in Splunk like this, making the logs a bit difficult to read.
{"log":"[https-jsse-nio-8443-exec-5] 19 Jan 2021 15:30:57,237+0000 UTC INFO rdt.damien.services.CaseServiceImpl CaseServiceImpl :: showCase :: Case Created \n","stream":"stdout","time":"2021-01-19T15:30:57.24005568Z"}
However, there are other orgs in our sibling enterprise whose Splunk logs look like the following, which is far more readable. (There is no relation between our tech stacks, so we are not able to leverage their tech support to triage this.)
[http-nio-8443-exec-7] 15 Jan 2021 21:08:49,511+0000 INFO DaoOImpl [{applicationSystemCode=dao-app, userId=ANONYMOUS, webAnalyticsCorrelationId=|}]: This is a sample log
Please note the difference in the logs (mine vs. theirs):
{"log":"[https-jsse-nio-8443-exec-5]..
vs
[http-nio-8443-exec-7]...
Our enterprise team is struggling to determine what causes this. I checked my app.log (logged using Log4j), which looks fine and doesn't have the aforementioned {"log": ...} wrapper:
[https-jsse-nio-8443-exec-5] 19 Jan 2021 15:30:57,237+0000 UTC INFO rdt.damien.services.CaseServiceImpl CaseServiceImpl :: showCase :: Case Created
Could someone guide me as to where the problem/configuration could lie that is causing the Splunk forwarder to send the logs to Splunk in the {"log": ...} format? I thought it had something to do with JSON vs. RAW source types, which I also don't fully understand; if that is the cause, what configs are driving it?
Over the course of my investigation I found that it is not Splunk that's doing this but rather the Docker container. Docker defaults to the json-file logging driver, which writes the output to files under /var/lib/docker/containers with a -json.log suffix, containing the logs in the {"log": <EVENT>} format.
I need to figure out how to configure Docker logging (i.e., the Docker logging driver) to write in a non-JSON format.
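From the Docker documentation, the change presumably looks something like this (not yet verified in our environment; whether the forwarder can still pick up the files under a different driver is something we'd still have to confirm). Daemon-wide, in /etc/docker/daemon.json (applies to containers created after a daemon restart):
{
  "log-driver": "local"
}
Or per container:
docker run --log-driver local <image>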

TFS 2015.4 Unable to start or detach a collection

I'm having an issue with a particular team project collection on my TFS 2015.4 instance. This collection caused issues when I wanted to upgrade TFS some time ago as well. I was able to detach it in TFS 2013.3 and then upgraded. Now I want to upgrade to TFS 2017 and I don't know how to resolve the issue with this collection.
TF400868: Job definition not found for JobId d891ac97-ddf1-42df-8242-3cd4bd607790
Here is the current status:
Won't detach
Won't start
Status -> ApplyPatch -> won't execute
Stays Offline
The one project inside stays in the 'Deleting' state
If I try to start, I get this error:
TF400783: The host 'MyDAS' cannot be started. The host is in the process of being serviced. The servicing may have failed and needs to be restarted and completed before the host can be started.
I did a pre-production upgrade to TFS2017 and there was a validation error with this collection's state that prevented me from finishing the upgrade.
The detailed log for the ApplyPatch rerun has just one failure point:
[12:29:58.457] +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
[12:29:58.457] Executing step: Populate commit changes
[12:29:58.457] Executing step: 'Populate commit changes' Git.M83PopulateCommitChanges (1017 of 1201)
[12:29:58.477] [Error] TF400868: Job definition not found for JobId d891ac97-ddf1-42df-8242-3cd4bd607790.
[12:29:58.480] Microsoft.TeamFoundation.Framework.Server.JobDefinitionNotFoundException: TF400868: Job definition not found for JobId d891ac97-ddf1-42df-8242-3cd4bd607790.
[12:29:58.480] at Microsoft.TeamFoundation.Framework.Server.TeamFoundationJobService.ResolveJobPriorityClasses(IVssRequestContext requestContext, IEnumerable`1 jobReferences, ITFLogger logger)
[12:29:58.480] at Microsoft.TeamFoundation.Framework.Server.TeamFoundationJobService.QueueJobsRaw(IVssRequestContext requestContext, IEnumerable`1 jobReferences, JobPriorityLevel priorityLevel, Int32 maxDelaySeconds, ITFLogger logger, Boolean queueAsDormant)
[12:29:58.480] at Microsoft.TeamFoundation.Server.Deploy.TFCollection.GitStepPerformer.M83PopulateCommitChanges(IVssRequestContext requestContext, ServicingContext servicingContext)
[12:29:58.480] at Microsoft.TeamFoundation.Framework.Server.TeamFoundationStepPerformerBase.PerformHostStep(String servicingOperation, ServicingOperationTarget target, IServicingStep servicingStep, String stepData, ServicingContext servicingContext)
[12:29:58.480] at Microsoft.TeamFoundation.Framework.Server.TeamFoundationStepPerformerBase.PerformStep(String servicingOperation, ServicingOperationTarget target, String stepType, String stepData, ServicingContext servicingContext)
[12:29:58.480] at Microsoft.TeamFoundation.Framework.Server.ServicingStepDriver.PerformServicingStep(ServicingStep step, ServicingContext servicingContext, ServicingStepGroup group, ServicingOperation servicingOperation, Int32 stepNumber, Int32 totalSteps)
[12:29:58.480] Step failed: Populate commit changes. Execution time: 23 milliseconds.
[12:29:58.480] [StepDuration] 0.0236576
[12:29:58.480] [GroupDuration] 0.2517195
[12:29:58.480] [OperationDuration] 0.2517302
[12:29:58.587] Clearing dictionary, removing all items.
======================================================================================================
Step execution times in descending order
======================================================================================================
Updates all rows in tbl_GitCommit and sets the Status to ... (GitToDev14M83Collection, ToDev14M83Collection) - 227 milliseconds
Populate commit changes (GitToDev14M83Collection, ToDev14M83Collection) - 23 milliseconds
Write service level to stamp (StartInstallUpdates, StartInstallUpdates) - 20 milliseconds
Configure framework servicing tokens (VsspToDev14M71Collection, VsspToDev14M71Collection) - 20 milliseconds
Setup integration environment (TestManagementToDev12M65FinalConfiguration, ToDev12M65FinalConfiguration) - 3 milliseconds
Setup Git environment (GitToDev14M74Collection, ToDev14M74Collection) - 1 millisecond
Setup Git environment (GitToDev14M83Collection, ToDev14M83Collection) - 1 millisecond
Set the collection partition id tokens in servicing context (GitToDev14M83Collection, ToDev14M83Collection) - 1 millisecond
======================================================================================================
Execution times by group in descending order
======================================================================================================
GitToDev14M83Collection (ToDev14M83Collection) - 250 milliseconds
StartInstallUpdates (StartInstallUpdates) - 20 milliseconds
VsspToDev14M71Collection (VsspToDev14M71Collection) - 20 milliseconds
TestManagementToDev12M65FinalConfiguration (ToDev12M65FinalConfiguration) - 3 milliseconds
GitToDev14M74Collection (ToDev14M74Collection) - 1 millisecond
I was pondering whether this is a rogue job that needs to be deleted from the database manually, but I might be wrong. Any pointer will be gratefully +1-ed.
The main reason TFS is unable to upgrade a team project collection is usually that its data was already corrupted, incomplete, or stuck between schema versions.
You will need to contact customer support for assistance troubleshooting these data issues and getting your data back to a workable state.
Since there is only one project inside, if you don't care about the source control history and have a backup of the project, a crude (not recommended) way would be to delete the problematic project collection, delete its database from SQL Server, create a new collection, and restore the project into it.
Also take a look at this similar question with the same error as yours: Starting a team project collection after detaching and attaching again
Ultimately, there was no way to recover the collection, so I used the admin command to permanently remove it. In my case, no one came back to me asking where the collection went, so it's solved for me. I'm running the latest TFS 2017.2 and the rest of the devs are happy.
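For anyone who lands here later, the command I'm referring to is, I believe, along these lines (run on the application tier; the collection name below is a placeholder, and note that a collection deleted this way cannot be reattached):
TfsConfig collection /delete /collectionName:"<YourCollectionName>"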

Too Many Events Using DTF InstallLogModes

I'm currently logging "everything" using the following flags:
const DTF.InstallLogModes logEverything = DTF.InstallLogModes.FatalExit |
DTF.InstallLogModes.Error |
DTF.InstallLogModes.Warning |
DTF.InstallLogModes.User |
DTF.InstallLogModes.Info |
DTF.InstallLogModes.ResolveSource |
DTF.InstallLogModes.OutOfDiskSpace |
DTF.InstallLogModes.ActionStart |
DTF.InstallLogModes.ActionData |
DTF.InstallLogModes.CommonData |
DTF.InstallLogModes.Progress |
DTF.InstallLogModes.Initialize |
DTF.InstallLogModes.Terminate |
DTF.InstallLogModes.ShowDialog;
DTF.Installer.SetInternalUI(DTF.InstallUIOptions.Silent);
var handler = new DTF.ExternalUIRecordHandler(ProcessMessage);
DTF.Installer.SetExternalUI(handler, logEverything);
DTF.Installer.EnableLog(logEverything, logPath, true, true);
DTF.Installer.InstallProduct(installerPath, commandLine);
This has the effect of writing an enormous number of events to the log file.
For example, I'm seeing thousands of these:
MSI (s) (14:A0) [11:33:50:764]: Component: comp_27E5179987044690962CE98B3F95FD72; Installed: Local; Request: Null; Action: Null; Client State: Local
MSI (c) (4C:8C) [11:34:17:869]: Creating MSIHANDLE (592) of type 790531 for thread 8076
MSI (c) (4C:8C) [11:34:17:893]: Closing MSIHANDLE (592) of type 790531 for thread 8076
How do I disable those extremely verbose messages in the log? I need to keep the Progress events.
If you don't want them, don't set those bits in the API call; just set Progress. However, you do need to get hold of the error messages and warnings so you can display them.
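In other words, something like this sketch (reusing your variable names), which keeps only progress plus the message types you still need to surface:
const DTF.InstallLogModes progressAndErrors = DTF.InstallLogModes.Progress |
DTF.InstallLogModes.FatalExit |
DTF.InstallLogModes.Error |
DTF.InstallLogModes.Warning;
DTF.Installer.SetInternalUI(DTF.InstallUIOptions.Silent);
DTF.Installer.SetExternalUI(new DTF.ExternalUIRecordHandler(ProcessMessage), progressAndErrors);
// The file log only needs errors/warnings; Progress is consumed by the external UI handler.
DTF.Installer.EnableLog(DTF.InstallLogModes.FatalExit | DTF.InstallLogModes.Error | DTF.InstallLogModes.Warning, logPath, true, true);
DTF.Installer.InstallProduct(installerPath, commandLine);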
However, what's your goal here? You don't need to re-invent the logging that you can get in other ways. The purpose of that external UI callback API is that you are now in charge of all the UI for the install. This isn't really about logging; it's about you being responsible for the UI, and a standard install will typically show all those messages in one form or another. For example, along with progress messages you get action messages that say what's going on (the file name being copied, etc.). If this is an actual product that you are installing, then you really need to show error messages, files-in-use dialogs, and warnings, or you're simply hiding everything that goes on.
Link to the underlying API docs: https://msdn.microsoft.com/en-us/library/aa370573(v=vs.85).aspx

SSIS package fails and then runs successfully 15 minutes later

I have an SSIS package that is scheduled to run every weekday morning at 8:15. It copies data to and from Active Directory and SQL. About two weeks ago, it started failing, with no changes having been made to the server (beyond MS updates).
The funny thing is that if I then immediately run the package again, it succeeds. Here is the error text from when it fails:
Date 7/14/2011 8:15:00 AM
Log Job History (Reference: Active Directory)
Step ID 1
Server MMCI-GD1SQL2
Job Name Reference: Active Directory
Step Name Run Package
Duration 00:00:32
Sql Severity 0
Sql Message ID 0
Operator Emailed
Operator Net sent
Operator Paged
Retries Attempted 0
Message
Executed as user: MMCI\service-sql. Microsoft (R) SQL Server Execute Package Utility Version 10.0.1600.22 for 32-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved.
Started: 8:15:00 AM Error: 2011-07-14 08:15:31.88
Code: 0xC0047062
Source: Synchronize Permissions Active Directory Permissions [133]
Description: System.DirectoryServices.AccountManagement.PrincipalOperationException: There is no such object on the server. ---> System.DirectoryServices.DirectoryServicesCOMException (0x80072030): There is no such object on the server.
at System.DirectoryServices.DirectoryEntry.Bind(Boolean throwIfFail)
at System.DirectoryServices.DirectoryEntry.Bind()
at System.DirectoryServices.DirectoryEntry.RefreshCache()
at System.DirectoryServices.AccountManagement.ADStoreCtx.LoadDirectoryEntryAttributes(DirectoryEntry de)
--- End of inner exception stack trace ---
at Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost.HandleUserException(Exception e)
at Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost.PrimeOutput(Int32 outputs, Int32[] outputIDs, PipelineBuffer[] buffers)
at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPrimeOutput(IDTSManagedComponentWrapper100 wrapper, Int32 outputs, Int32[] outputIDs, IDTSBuffer100[] buffers, IntPtr ppBufferWirePacket) End Error Error: 2011-07-14 08:15:31.90
Code: 0xC0047038
Source: Synchronize Permissions SSIS.Pipeline
Description: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Active Directory Permissions" (133) returned error code 0x80131501. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure. End Error DTExec: The package execution returned DTSER_FAILURE (1). Started: 8:15:00 AM Finished: 8:15:31 AM Elapsed: 31.343 seconds. The package execution failed. The step failed.
Any thoughts?
Has some new Group Policy been applied that changed the permissions for the account your automated run uses, but which doesn't apply to your user id? I'm assuming when you say "I then ... run the package", you mean your logged-in user id.
Based on the error message you provided, the issue seems to be that a task within your package is trying to query an object in Active Directory that might no longer exist.
System.DirectoryServices.AccountManagement.PrincipalOperationException:
There is no such object on the server. --->
System.DirectoryServices.DirectoryServicesCOMException (0x80072030):
There is no such object on the server.
I could be wrong about the part below; I am just speculating about what your package might be doing based on the description provided.
Since your package synchronizes data between SQL Server and Active Directory, I assume that the task named Synchronize Permissions Active Directory Permissions selects some form of data stored in SQL Server and updates the content in Active Directory, or vice versa. If my assumption is correct, this task is probably a Script Task or Script Component. I believe that the code inside this component is failing to select an object (a group or user) in Active Directory.
I would check whether a group/user was deleted in Active Directory on the days prior to when the package failed to run.
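If that is what is happening, a defensive lookup inside the script component would turn the hard failure into something you can log and skip. A rough illustration (not your actual code, of course; 'samAccountName' stands in for whatever identity value the package reads from SQL Server):
using System.DirectoryServices.AccountManagement;

static bool PrincipalStillExists(string samAccountName)
{
    using (var ctx = new PrincipalContext(ContextType.Domain))
    using (Principal p = Principal.FindByIdentity(ctx, samAccountName))
    {
        // FindByIdentity returns null when the user/group no longer exists,
        // instead of throwing later when its DirectoryEntry is refreshed.
        return p != null;
    }
}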
Hope this helps.

Team Foundation/SQL/Silverlight creation of new team project failure Timeout

This is a rather convoluted problem, because we are setting up TFS with SQL Reporting and Silverlight integration. We followed the horrific path of setup instructions that spans three different servers, and when we finished, we started getting the following error.
This error results from attempting to create a new team project within the project group.
Following its progress on the reports page, we can see it create the folders cleanly, but when it attempts to create the actual reports on the system, it times out. I've checked every other site I could find to try to figure out what went wrong, and nothing suggested has worked. Any help here would be greatly appreciated.
Error/Stack Trace attached below:
2011-01-19T15:54:21 | Module: Engine | Thread: 6 | Running Task "" from Group ""
2011-01-19T15:54:24 | Module: Rosetta | Thread: 19 | Creating folder: /TfsReports/Boeing/admin/Bugs
2011-01-19T15:54:25 | Module: Rosetta | Thread: 19 | Creating folder: /TfsReports/Boeing/admin/Builds
2011-01-19T15:54:26 | Module: Rosetta | Thread: 19 | Creating folder: /TfsReports/Boeing/admin/Project Management
2011-01-19T15:54:27 | Module: Rosetta | Thread: 19 | Creating folder: /TfsReports/Boeing/admin/Tests
2011-01-19T15:54:29 | Module: Rosetta | Thread: 19 | Creating folder: /TfsReports/Boeing/admin/Dashboards
2011-01-19T15:54:30 | Module: Rosetta | Thread: 19 | Creating report: /TfsReports/Boeing/admin/Bugs/Bug Status
---begin Exception entry---
Time: 2011-01-19T15:59:30
Module: Engine
Event Description: TF30162: Task "Populate Reports" from Group "Reporting" failed
Exception Type: Microsoft.TeamFoundation.Client.PcwException
Exception Message: TF30225: Error uploading report 'Bug Status': The operation has timed out
Stack Trace:
at Microsoft.VisualStudio.TeamFoundation.RosettaReportUploader.Execute(ProjectCreationContext context, XmlNode taskXml)
at Microsoft.VisualStudio.TeamFoundation.ProjectCreationEngine.TaskExecutor.PerformTask(IProjectComponentCreator componentCreator, ProjectCreationContext context, XmlNode taskXml)
at Microsoft.VisualStudio.TeamFoundation.ProjectCreationEngine.RunTask(Object taskObj)
-- Inner Exception --
Exception Message: TF30225: Error uploading report 'Bug Status': The operation has timed out (type ReportingUploaderException)
Exception Stack Trace: at Microsoft.TeamFoundation.Client.Reporting.ReportingUploader.UploadReport(XmlNode report)
at Microsoft.TeamFoundation.Client.Reporting.ReportingUploader.HandleCreateReports(XmlNode node)
at Microsoft.TeamFoundation.Client.Reporting.ReportingUploader.Run()
at Microsoft.VisualStudio.TeamFoundation.RosettaReportUploader.Execute(ProjectCreationContext context, XmlNode taskXml)
Inner Exception Details:
Exception Message: The operation has timed out (type WebException)
Exception Stack Trace: at System.Web.Services.Protocols.WebClientProtocol.GetWebResponse(WebRequest request)
at System.Web.Services.Protocols.HttpWebClientProtocol.GetWebResponse(WebRequest request)
at Microsoft.TeamFoundation.Client.TeamFoundationSoapProxy.GetWebResponse(WebRequest request)
at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
at Microsoft.TeamFoundation.Client.Reporting.ReportingService.CreateReport(String Report, String Parent, Boolean Overwrite, Byte[] Definition, Property[] Properties)
at Microsoft.TeamFoundation.Client.Reporting.ReportingUploader.UploadReport(XmlNode report)
--- end Exception entry ---
2011-01-19T15:59:31 | Module: Engine | Thread: 19 | TF30202: Task "" from Group "" will not be run because a prior task failed.
2011-01-19T15:59:31 | Module: Engine | Thread: 19 | TF30202: Task "SharePointPortal" from Group "Portal" will not be run because a prior task failed.
2011-01-19T15:59:31 | Module: Engine | Thread: 19 | TF30202: Task "" from Group "" will not be run because a prior task failed.
Denis Habib posted this solution to a similar problem; perhaps you have the same issue:
The problem is with uploading a report to the report server. I think you have the correct permissions, since you were able to create the site. The problem may have to do with the security settings on the data sources (TfsOlapReportDS and TfsReportDS), as these are the data sources for the reports.
Please verify the following settings: navigate to the reporting site (http://<yourserver>/Reports/Pages/Folder.aspx), click on TfsOlapReportDS and TfsReportDS, and verify the connection settings for each, specifically the 'Connect using:' section. This is generally set to 'Credentials stored securely in the report server' with a valid username/password specified. Also, 'Use as Windows credentials when connecting to the data source' should be checked.