I am trying to capture the following PerformanceCounters on the Azure WebRole:
private string[] perfCounters = { @"\Processor(_Total)\% Processor Time",
    @"\ASP.NET Applications(__Total__)\Requests/Sec",
    @"\Memory\Available Bytes",
    @"\ASP.NET\Request Execution Time",
    @"\ASP.NET\Requests Queued" };
I have the following code in my WebRole.cs to enable capturing of these perf counters:
DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();
int loggingInterval = Int32.Parse(RoleEnvironment.GetConfigurationSettingValue("loggingInterval"));
config.Logs.ScheduledTransferPeriod = System.TimeSpan.FromMinutes(loggingInterval);

// Register each counter with a one-minute sample rate.
foreach (String s in perfCounters)
{
    PerformanceCounterConfiguration procTimeConfig = new PerformanceCounterConfiguration();
    procTimeConfig.CounterSpecifier = s;
    procTimeConfig.SampleRate = System.TimeSpan.FromMinutes(1.0);
    config.PerformanceCounters.DataSources.Add(procTimeConfig);
}

config.PerformanceCounters.ScheduledTransferPeriod = System.TimeSpan.FromMinutes(1.0);
DiagnosticMonitor.Start("DiagnosticsConnectionString", config);
As you can see, I am setting the scheduled transfer period of the perf counters to 1 minute.
Now, I am able to see these counters in the WADPerformanceCounters table on my dev fabric, but they never show up in that table once deployed to the Azure cloud. Can anyone point out what I could be doing wrong here?
Kapil
The problem, as it turned out, was not in any of the places I was looking. The fix was pretty simple: I deleted the pre-existing deployment and uploaded my cspkg file as a fresh deployment. It seems the perf counters are picked up from an XML file under the wad-control-container blob, and this XML file is created per deployment. I realized that the XML file was not getting updated in my case; when I deleted the deployment and created a new one, the fresh values were picked up.
Thanks
Kapil
Changes to the diagnostic settings are applied only when a full deployment is performed, not on an update deployment.
To perform a full deployment, open the publish dialog, go to Settings > Advanced Settings, and uncheck the Deployment Update checkbox. The next publish will then be a full deployment.
Also, it is possible to update your settings without performing a deployment.
In Server Explorer, go to Windows Azure => Cloud Services => your cloud service => Production => your worker role, right-click it, and click Update Diagnostics Settings. This fetches the currently deployed diagnostic settings, which you can update in place without performing any deployment.
Of course, if you want to verify that your code is actually setting things the right way, then you will need to do a full deployment as described above so that your code runs, and then verify the results.
http://msdn.microsoft.com/library/azure/dn186185.aspx
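If you prefer to script that remote update instead of using Server Explorer, the same thing can be done against the wad-control-container configuration through the diagnostics management API. This is only a sketch, assuming the Azure SDK 1.x Microsoft.WindowsAzure.Diagnostics.Management assembly; the connection string, deployment ID, and role name are placeholders:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.Diagnostics.Management;

class UpdateDiagnosticsSettings
{
    static void Main()
    {
        // Placeholders: your storage connection string, deployment ID and role name.
        CloudStorageAccount account = CloudStorageAccount.Parse("<storage connection string>");
        var deployment = new DeploymentDiagnosticManager(account, "<deploymentId>");

        foreach (var instanceManager in deployment.GetRoleInstanceDiagnosticManagersForRole("<roleName>"))
        {
            // Reads the per-instance XML under wad-control-container, edits it, writes it back.
            DiagnosticMonitorConfiguration cfg = instanceManager.GetCurrentConfiguration();
            cfg.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0);
            instanceManager.SetCurrentConfiguration(cfg);
        }
    }
}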
Since you're getting counters in the dev fabric but not in the Azure fabric, let me ask the obvious: did you change your DiagnosticsConnectionString setting to point at your Azure storage account rather than development storage?
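One quick way to check is to parse the configured connection string at role start and trace where it points. A sketch only (the setting name matches the code above; the trace output is illustrative):

using System.Diagnostics;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class DiagnosticsCheck
{
    // Call from WebRole.OnStart() to see which storage account diagnostics will use.
    public static void TraceDiagnosticsEndpoint()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DiagnosticsConnectionString"));
        Trace.TraceInformation("Diagnostics blob endpoint: {0}", account.BlobEndpoint);
        // Dev fabric resolves to http://127.0.0.1:10000/...;
        // in the cloud you should see https://<youraccount>.blob.core.windows.net/
    }
}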
I realise a debug run is normally not visible in the Data Factory v2 UI after closing the browser window; unfortunately, I needed to restart my machine unexpectedly and it's a long-running pipeline.
I thought maybe the runs might be available via PowerShell, but I haven't had any luck.
The pipeline is likely still running.
We do have external logging, but ideally I'd like to see how long each activity is taking, as I'm load testing.
More importantly, I do not want to start another run until I'm sure this one has finished... notably, I'll run it from a trigger next time (just in case!).
EDIT:
It looks like a sandbox ID is used, which is stored in the browser's local storage, and there appear to be undocumented API endpoints for gathering info using that sandbox ID. But there doesn't appear to be a way of getting old sandbox IDs, so I'm probably out of luck.
There is a button to view all debug runs.
Taken from Microsoft documentation:
To view a historical view of debug runs or see a list of all active debug runs, you can go into the Monitor experience.
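For triggered runs (which, per the edit above, is the plan next time), the run and per-activity timings can also be pulled programmatically. A sketch using the ADF .NET SDK, assuming the Microsoft.Azure.Management.DataFactory NuGet package; the tenant/subscription/factory values are placeholders, and note that debug (sandbox) runs are exactly the ones this will not return:

using System;
using Microsoft.Azure.Management.DataFactory;
using Microsoft.Azure.Management.DataFactory.Models;
using Microsoft.Rest.Azure.Authentication;

class ListRuns
{
    static void Main()
    {
        // Placeholders: service principal credentials and factory coordinates.
        var creds = ApplicationTokenProvider.LoginSilentAsync(
            "<tenantId>", "<clientId>", "<clientSecret>").Result;
        var client = new DataFactoryManagementClient(creds) { SubscriptionId = "<subscriptionId>" };

        var filter = new RunFilterParameters(DateTime.UtcNow.AddDays(-1), DateTime.UtcNow);
        foreach (PipelineRun run in client.PipelineRuns
                 .QueryByFactory("<resourceGroup>", "<factoryName>", filter).Value)
        {
            Console.WriteLine("{0} {1} {2} ms", run.PipelineName, run.Status, run.DurationInMs);

            // Per-activity durations for this run - useful when load testing.
            foreach (ActivityRun act in client.ActivityRuns.QueryByPipelineRun(
                     "<resourceGroup>", "<factoryName>", run.RunId, filter).Value)
            {
                Console.WriteLine("  {0}: {1} ms ({2})", act.ActivityName, act.DurationInMs, act.Status);
            }
        }
    }
}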
I created a feature class in an enterprise geodatabase (SQL Server 2014 Express). The feature class is sync-enabled and was published successfully.
Now I cannot generate an offline geodatabase from the ArcGIS Android SDK.
I can see 'Create Replica' under 'Supported Operations' at 'http://xyz:6080/arcgis/rest/services/MyFeature/FeatureServer'.
I tried the 'http://xyz:6080/arcgis/rest/services/MyFeature/FeatureServer/createReplica' REST endpoint from the feature service. It creates a job, but no results are shown.
The server logs show the following error:
Error executing tool.: ErrorMsg#SyncGPService:{"code":400,"description":""} Failed to execute (Create Feature Service Replica).
Log source is 'System/SyncTools.GPServer'
First, make sure that nothing is wrong at the DB level where your data is stored. Taking the server out of the equation: can you run the Create Replica tool in ArcMap/ArcGIS Pro against the data source, and does it succeed? If that works (along with other operations like adds, updates, and deletes), then put ArcGIS Server back in the equation.
What are your ArcGIS Server log levels set to? It may be worth raising the logging level to Verbose or Debug, trying to create the replica again, and consulting the logs to see if more helpful information is returned.
You may also want to check and see if your version of ArcGIS Server needs to be patched. For example, at 10.5.1 there was a patch released specifically for Sync issues.
If all else fails, Esri Support may be a good place to find some help as well.
Have you looked at the requirements for making your data available for offline use? See this link in the ArcGIS Server documentation.
Specifically you need to enable archiving and include Global IDs on the dataset, but there are more details at the above link.
For future reference, and in case that suggestion doesn't work, the Esri GeoNet ArcGIS Enterprise place is a good spot to ask these questions.
We recently upgraded from TFS 2010 to TFS 2015. Everything appears to be fine post-upgrade, but we are getting the error "The item is locked in workspace (null);(null)." on some source control files. It looks like we have some orphaned locks that need to be tracked down and cleaned up, but the tbl_Lock table no longer exists in the database, so the following SELECT query won't work:
select * FROM tbl_Lock l
LEFT JOIN tbl_PendingChange pc
ON l.PendingChangeId = pc.PendingChangeId
WHERE pc.PendingChangeId IS NULL
Does anyone know how to detect and remove these locks in TFS 2015?
I also installed the TFS power tools, and neither Visual Studio 2015 nor the power tools are picking up the locks.
Updated:
BTW, when I run the SELECT query to find rows where PendingChangeId is NULL, I get back no rows. I think the trick is the LEFT JOIN: pc.PendingChangeId would be NULL when tbl_Lock has a record whose PendingChangeId no longer exists in tbl_PendingChange (and thus the lock was orphaned). So I'd still need to know what the lock table's join target should be in TFS 2015, to identify which files have a bad lock. (Or where a workspace no longer exists, which may be another possible source of the issue.)
And I also still need to know how to clean up those bad locks. I'd prefer to do this using the tools, either via the GUI or the command line, but I could also do it programmatically using the API or the TFS Object Model for TFS 2015.
I would rather touch the database directly only as a last resort, and I would likewise rather use tf vc destroy on the item only as a last resort, since that would wipe out all history on the files.
Update 2
Aha! I think I found a way to identify the files, and it looks like my theory of what happened may be correct. Unfortunately, I had to probe the database with a READ UNCOMMITTED query to find the information; I couldn't get at it programmatically or through the tools (they all showed, or acted like, the files were not checked out). The query I used on TFS 2015 was:
select pc.* from tbl_PendingChange pc
left join tbl_Workspace ws on pc.WorkspaceId = ws.WorkspaceId
where ws.WorkspaceId is null
This returned the three files that have the (null);(null) lock in our database, because the WorkspaceId listed in tbl_PendingChange no longer exists in tbl_Workspace.
How did this happen? Our CI server uses temporary TFS workspaces. I think that after the upgrade, our CI server went to check out a file and apply an update to it (for example, to increment version numbers as part of the build process). It checked out the file but failed to apply the update. (Our tools like working with server workspaces, but it may have ended up with a local workspace, so the file was still checked in locally but checked out on the server, and thus the change couldn't be applied.) Our code performs a workspace.Delete operation when the process completes, so the workspace was deleted even though it still had the file checked out. This created an orphan record in tbl_PendingChange that isn't linked to any workspace, so the file is still locked with pending changes. But the GUI and tools don't see it that way, because they don't realize the pending change's workspace is non-existent.
So this brings me back around to: how do I fix this? If someone knows of a way to get at these orphaned pending changes, I'd appreciate it. I tried using:
TfsTeamProjectCollection tfsTeamProjectCollection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(szProjectUri));
VersionControlServer versionControlServer = tfsTeamProjectCollection.GetService<VersionControlServer>();

string[] items = new[] { ... server item path ... };

// Query pending sets across all workspaces and owners (null wildcards).
PendingSet[] queryPendingSets = versionControlServer.QueryPendingSets(items, RecursionType.None, null, null);
PendingSet[] getPendingSets = versionControlServer.GetPendingSets(items, RecursionType.None);
but these aren't finding the orphans.
Update 3
I finally installed Team Foundation Sidekicks 2015 and gave it a try, the Status sidekick specifically, then the other tools. It finds pending changes, but not the orphaned ones.
You can use Team Foundation Sidekicks to search for and undo the lock with the following steps:
1. Install the tool and launch it.
2. Select the TFS server to connect to.
3. Select "Tools\Status Sidekick".
4. Set the "Search criteria" for the information you want.
5. Click the "Search" button.
6. Select the locked file and click the "Unlock lock" button.
You can use the command below to undo the pending changes:
tf undo "file_path" /workspace:workspace_name
Or you can just use the command below to delete the old workspace:
tf workspace /delete /server:your_tfs_server workspace;username
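Since you mentioned preferring the API/TFS Object Model, the same cleanup can be scripted. A sketch, assuming the Microsoft.TeamFoundation client assemblies; the collection URI is a placeholder, and the delete is commented out so you can review each workspace first:

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class CleanUpWorkspaces
{
    static void Main()
    {
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://your-tfs:8080/tfs/DefaultCollection"));  // placeholder URI
        var vcs = collection.GetService<VersionControlServer>();

        // Enumerate every workspace on the server (null wildcards match any name/owner/computer).
        foreach (Workspace ws in vcs.QueryWorkspaces(null, null, null))
        {
            Console.WriteLine("{0};{1} on {2}", ws.Name, ws.OwnerName, ws.Computer);

            // After reviewing: deleting a workspace also undoes its pending changes.
            // vcs.DeleteWorkspace(ws.Name, ws.OwnerName);
        }
    }
}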
From the Visual Studio 2015 GUI:
File -> Source Control -> Advanced -> Workspaces...
In the dialog that comes up, check "Show remote workspaces"; the locked workspace should appear in the window. Select it and click "Remove".
For details, please check this blog; for more ways to resolve this, you can refer to the similar question: What do you do if the file in TFS is locked by someone else?
Update:
Regarding the SQL query: it looks for pc.PendingChangeId IS NULL. You can query tbl_PendingChange under the collection database in a similar way. However, this is not a recommended method, since operating directly on the TFS databases is unsupported.
The following command cleared up the orphaned pending changes:
tf vc destroy <itemspec> /startcleanup
After running this command, the file could be added back to TFS, and it could be checked in and out and edited as normal. Running the query:
select pc.* from tbl_PendingChange pc
left join tbl_Workspace ws on pc.WorkspaceId = ws.WorkspaceId
where ws.WorkspaceId is null
showed that the pending change record related to this file was gone as well.
Microsoft's documentation on this command can be found at https://msdn.microsoft.com/en-us/library/bb386005.aspx. Before using this command, you should review the documentation carefully and be sure to understand the consequences of using it.
Because this command permanently removes files and potentially all history from TFS - and does so recursively - you need to take precautions and be absolutely certain that you are targeting the command correctly. So before using this command, I would recommend taking the following additional precautions:
Stop all user and external accesses to TFS and any other software that may be running from the machine.
Make sure to run a full backup of TFS and any other databases located on the machine.
If you can, take a snapshot in time of the server.
That way if something goes horribly wrong, you will have one or more points to fall back on.
I have installed and configured SSRS in SharePoint integrated deployment mode and have been able to successfully run a report from SharePoint. I created a custom deployment application that uploads all reports and datasets, creates all data sources, and makes the proper connections between them when necessary.
I have one report that failed, and I need to manually fix the report's connection to a data source, but I found that the drop-down does not contain the options to let me manage its shared data sources (see example below).
In this image you can see the option that I am missing. Please excuse the colors, this is the best image I could find online in a pinch.
This is only happening in one environment so there must be a configuration change I am not thinking of to show these options. Here are the things I have already checked:
The account I am using is in the site's Owners group and has full control of everything, including the report file.
The item is being uploaded as a Document content type for some reason, but I edited the properties and changed it to the Report Builder Report content type.
The Report Server Integration site collection feature has been activated.
All of the Reporting Service content types have been added to the list.
I would revert to deployment from BIDS to debug this issue. It will perform some validation during that process and possibly return meaningful errors.
So this turned out to be caused by one of our customizations. We had an old custom JavaScript function with the same name as a SharePoint JavaScript function that is involved in those drop-down actions. Hope this helps someone else.
We are using AppFabric as the second-level cache for an NHibernate ASP.NET application comprising a customer-facing website and an admin website. Both are connected to the same cache, so when admin updates something, the customer-facing site is updated.
It seems to be working OK - we have a cache cluster on a separate server and all is well - but we want to enable local cache to get better performance. However, it doesn't seem to be working.
We have enabled it like this...
// NOTE: the value here was missing from the original snippet; presumably it is read from config.
bool UseLocalCache = true;
int LocalCacheObjectCount = int.MaxValue;
TimeSpan LocalCacheDefaultTimeout = TimeSpan.FromMinutes(3);
DataCacheLocalCacheInvalidationPolicy LocalCacheInvalidationPolicy = DataCacheLocalCacheInvalidationPolicy.TimeoutBased;

if (UseLocalCache)
{
    configuration.LocalCacheProperties =
        new DataCacheLocalCacheProperties(
            LocalCacheObjectCount,
            LocalCacheDefaultTimeout,
            LocalCacheInvalidationPolicy);

    // configuration.NotificationProperties = new DataCacheNotificationProperties(500, TimeSpan.FromSeconds(300));
}
Initially we tried using a timeout invalidation policy (3 mins) and our app felt like it was running faster. HOWEVER, we noticed that if we changed something in the admin site, it was immediately updated in the live site. As we are using timeouts, not notifications, this demonstrates that the local cache isn't being queried (or is, but always misses).
cache.GetType().Name returns "LocalCache", so the factory has created a local cache.
Running "Get-Cache-Statistics MyCache" in PS on my dev environment (asp.net app running local from vs2008, cache cluster running on a seperate w2k8 machine) show a handful of Request Counts. However, on the Production environment, the Request Count increases dramaticaly.
We tried following the method here to see the cache client-server traffic: http://blogs.msdn.com/b/appfabriccat/archive/2010/09/20/appfabric-cache-peeking-into-client-amp-server-wcf-communication.aspx but the log file had nothing but the initial header in it, i.e. no logging either.
I can't find anything on SO or Google.
Have we done something wrong? Have we got a screwy install of AppFabric? We installed it via the Web Platform Installer, I think.
(Note: the IIS box running ASP.NET isn't in the cluster; it is just the client.)
Any insights gratefully received!
Which DataCache methods are you using to read from the cache? Several of the DataCache methods will always hit the server regardless of local cache being configured. You pretty much have to make sure you only use Get if you want the local cache to be leveraged.
This is one of my biggest nits with AppFabric Caching. They don't explain any of this to you, so when you begin to rely on local caching you fall into these little pitfalls: you do not think you're paying a penalty for talking to the service, transferring data over the wire, and deserializing objects, but you are.
The worst thing is, I could understand having to talk to the service to make sure the local cache represents the latest data. I can even understand transferring the data back so that multiple calls are not made. What I can not understand for the life of me though is that even if the instance in the local cache is verified to still be the current version that came back from the cache, they still deserialize the object from the wire rather than just returning the instance that's in memory already. If your objects are large/complex this can hurt a lot.
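To make the distinction concrete, here is a sketch; the cache name "MyCache" and the key are assumed, and the comments reflect the behavior described above:

using Microsoft.ApplicationServer.Caching;

// 'configuration' has LocalCacheProperties set, as in the question.
DataCacheFactory factory = new DataCacheFactory(configuration);
DataCache cache = factory.GetCache("MyCache");  // assumed cache name

object first = cache.Get("some-key");   // server hit; the result is kept in the local cache
object second = cache.Get("some-key");  // can now be served from the local cache

DataCacheItem item = cache.GetCacheItem("some-key");  // goes back to the server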
After days and days of looking into why we got so many local cache misses, we finally solved it:
There is a bug with local cache in AppFabric v1.1 that is fixed in CU4; see http://support2.microsoft.com/kb/2800726/en-us
Make sure that the Microsoft.ApplicationServer.Caching.Client.dll used by your application is also updated. We had CU4 installed on the machine but got a pre-CU4 Client.dll from a NuGet package in our application. In our case, a simple NuGet package update made everything work.
After installing CU4 and making sure that the Client.dll was also updated, we reduced our reads against the AppFabric host by a lot, due to local cache hits increasing. Yay!
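If you want to confirm which client assembly your application actually loads at runtime, a quick check (a sketch; nothing AppFabric-specific about the technique):

using System;
using System.Diagnostics;
using Microsoft.ApplicationServer.Caching;

class ClientDllCheck
{
    static void Main()
    {
        // Locate the loaded Microsoft.ApplicationServer.Caching.Client assembly
        // and print its file version to verify the CU level.
        var assembly = typeof(DataCache).Assembly;
        var fileVersion = FileVersionInfo.GetVersionInfo(assembly.Location).FileVersion;
        Console.WriteLine("{0} -> {1}", assembly.Location, fileVersion);
    }
}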
Have you tried using an NHibernate profiler? http://nhprof.com/
There is also this:
http://mdcadmintool.codeplex.com/
It's a nice way to manage and view the cache.
Both of these may help in debugging the issue.