Azure App Insights, is there a way to query for thread count details? - cgi

This question is mostly for DevOps experts familiar with App Insights.
I found an issue in my app: it seems some threads are being created and never released, causing the thread count to increase until it ends, at some point, in the "CGI error", which usually happens when you exceed your quota for some resource.
I already identified that the exceeded resource is thread count, thanks to the Metrics option, which gives a graphical representation of how it is being consumed (and released when an app restart happens).
I would like to have some details on this: not the grouped information, but the actual data behind this graph. Any lead would help me understand which place is creating and not releasing threads: a namespace, a class name, anything.
Is there another place where I could get this information in a very detailed way? Application Insights queries seem to lack this metric.
Thanks in advance.

AFAIK there is no direct way to do this. The only way that I can see is by adding custom logging inside your application and sending the logs to a Log Analytics Workspace.
Inside your function app in the portal, go to 'Diagnostic settings' and connect it to your Log Analytics workspace (if one doesn't exist, create one).
Inside the Log Analytics workspace you will find your custom logs either under a 'Custom Logs' tab or under the 'Application Insights' tab. After that, find the correct field and parse it, with something like:
customMetrics
| extend d = parse_json(customDimensions)        // parse the JSON payload in customDimensions
| extend processSessionId = d.processSessionId   // pull out the field you care about
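If you can modify the application itself, one option is to emit your own metric whenever a piece of code spawns background work, tagging it with the namespace/class that created it, so the query above has something to parse. Below is a minimal sketch, assuming (purely for illustration) a Node.js app instrumented with the 'applicationinsights' npm package; the metric name, dimension names, and the trackWorkerStart helper are made up, so adapt them to your runtime and SDK:
const appInsights = require("applicationinsights");
appInsights.setup("<YOUR_CONNECTION_STRING>").start();
const client = appInsights.defaultClient;

// Hypothetical helper: call it wherever your code spawns background work,
// so the creator shows up in customDimensions and can be grouped on later.
function trackWorkerStart(namespace, className) {
  client.trackMetric({
    name: "ActiveWorkers",               // made-up metric name, pick your own
    value: 1,
    properties: { namespace, className } // ends up in customDimensions
  });
}

trackWorkerStart("MyApp.Jobs", "ReportGenerator"); // example call
A query like the one above, grouped on those dimensions, should then point you at the namespace/class responsible for the growth.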
For Azure related topics there is also a decent Q&A platform here:
https://learn.microsoft.com/en-us/answers/products/azure?product=all
For KQL (the Kusto Query Language) this is a handy page:
https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor
Hope this helps somewhat

Related

CRUD with single drag/drop or other action via API?

This is my first post/question. If there is an existing thread that answers my question, I missed it in my search and would definitely appreciate you linking me to it! Please let me know if I should be posting/asking this elsewhere.
My question relates to Salesforce.
I have a use case where a client has a monthly batch of files that need to be made available on various cloud-based storage/distribution platforms like Box and Dropbox, but also other less ubiquitous tools specific to the sector. Currently, the client logs into each distribution platform, one at a time, and uploads the files; then, if at any point any files need to be updated or removed/restricted, the client logs into each platform one by one and takes the necessary action. Obviously the process being described is tedious/laborious and leaves multiple gaps for error. The client and I are discussing a solution that would allow for create/read/update/delete actions in all of the distribution platforms without having to leave their Salesforce org. I am aware of existing AppExchange integrations for Box, Dropbox, etc., but they don't quite do everything we need (to my knowledge). They tie in nicely, and there are use cases where they are powerful tools, but my understanding of those existing integrations is that they would still each require dedicated tabs within the Object and repeated 'drags' and 'drops' of the same files to each tab. Again, the end goal here is that, for example, the client wants to drag and drop one time and have the file pushed to the various platforms. Or, as another example, they would like to choose "delete" one time from within Salesforce and have the file removed/restricted on all distribution platforms.
I am a certified SF Admin 1, so perhaps this should be in my wheelhouse, but I feel unsure how to approach it. My feeling is that this calls for a combination of integrations via API and Process/Flow work, but I am hoping for some ideas/input/guidance. Any insight or help any of you have to offer would be so greatly appreciated!
Thanks so much!

Is it possible to avoid collecting specific data in Sentry?

When I'm on a Sentry issue description page I can see some information collected by the Sentry service, and I'd like to avoid collecting it in order to avoid privacy issues.
The information that I'd like to not see is app.device and user id, as you can see here:
Is it possible? I'm concerned about the new Apple privacy restrictions. I don't know if I understood them correctly, but it seems necessary to explain to the user, using a pop-up or something similar, that the app is using third-party software to collect data about "app crashing" and "app performance". Giving the user the possibility to choose not to collect that data would bring developers a lot of headaches.
I searched through all the project settings and documentation, but I only found a way to hide certain tags/data; the point is not hiding the information, but not collecting it at all.
Thanks
The 'user.id' that Sentry creates is not an identifier that can be used to track the user across apps or devices. It's a random id created when the app runs for the first time, and it's sent with all errors that happen.
The sole goal of this id is to give the developer an idea of how many different users are affected by an issue. The developer (owner of the app) doesn't know exactly who the user is, and if that same user reinstalls the app, a new id is generated, so technically Sentry would report all new errors as coming from a new user. That's fine, given the goal is to approximate the impact of an issue.
Developers can then focus on the issues that affect the most customers.
That said, you can strip data in many ways, either through the SDK or in Sentry itself.
If you drop data in Sentry, that is done before the event is written to disk.
Sentry's documentation talks about Scrubbing Sensitive Data here.
Doing it on the SDK side, for example for React Native, you could do:
Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
  beforeSend(event) {
    // Modify the event here
    if (event.user) {
      // Don't send the user id
      delete event.user.id;
    }
    return event;
  },
});
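As a hedged follow-up (not something the original answer covers): since the question also mentions app.device, the same beforeSend hook can drop the device context before the event leaves the app. Whether that removes everything shown as app.device in the UI is an assumption on my part, so inspect the outgoing event payload to confirm:
import * as Sentry from "@sentry/react-native";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
  beforeSend(event) {
    // Drop the random user id, as in the snippet above
    if (event.user) {
      delete event.user.id;
    }
    // Assumption: dropping the device context removes the device details
    // the question refers to; verify against a real event payload.
    if (event.contexts) {
      delete event.contexts.device;
    }
    return event;
  },
});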
There's also a page talking about Data Privacy in the context of Google and Apple:
https://docs.sentry.io/product/security/mobile-privacy/

How to start a VM instance using Cloud Scheduler

Background and Goal
I have a Debian/Linux VM on GCP which I manually start every morning; after it runs, it shuts itself down using a Linux command. I want to automate the start of the VM using Cloud Scheduler. The question asked in GCP auto shutdown and startup using Google Cloud Schedulers has several answers, and I am interested in pursuing the answer (https://stackoverflow.com/a/65062924/10322004) proposed by #nikelone because it seems to be simple and has also been endorsed by #Damien and #RayFoss as being easy. I am a neophyte in these matters and could not comprehend their replies fully, so this post was created to elicit clearer answers for a person like me.
What I have tried
I have gone to https://cloud.google.com/compute/docs/reference/rest/v1/instances/start (call this page A) and tried the API, and I was able to successfully start my already stopped VM when I clicked on the Execute button. I presume this means that my entries were fine and can be used in conjunction with appropriate software like Cloud Scheduler to perform the start function on a predefined schedule. But the problem is that I do not know or understand how to proceed from here. My questions are below.
My Questions
On page A, the last three paragraphs are titled Authorization Scopes, IAM permissions, and Examples, and none of them say anything specific about what the user should do. Is it correct to assume that they have nothing to do with the Cloud Scheduler, but related to other methods to achieve the same goal? If this is not correct then my next question is what should I be doing to follow the statements in these three paragraphs?
Assuming that the answer to question 1 is "yes", meaning I can now start scheduling with Cloud Scheduler, I next looked at the quickstart for Cloud Scheduler at https://cloud.google.com/scheduler/docs/quickstart (call this page B). The list of items to do is quite large, including installing the Cloud SDK, running quite a few commands on the console, enabling some features, setting up Pub/Sub, creating a job, running the job, and verifying the results in Pub/Sub. This looks like a daunting set of tasks, and I could not understand why it is necessary to jump through these hoops to use something that has already been achieved with just a few keystrokes earlier. So are these steps all necessary? Or is there a way to use Cloud Scheduler directly without going through so many intermediate steps?
Now assume that the answer to question 2 is that I have to perform all steps stated on page B. If I run into some problem while accomplishing the tasks outlined on page B, my VM may get messed up irretrievably. Is there a way in which the Cloud Platform or its components can be used to reset my VM to its current state as of today, which is working fine? I really do not want to end up with something worse than what I have now.
To answer your questions:
1. Auth scopes and IAM permissions are required for you to call Compute Engine API methods such as instances.start and instances.stop. You need to set the right scope and the right IAM permission on your job or else it will fail, so they are indeed related to the method you're interested in calling and you must keep them in mind. What you see in the Examples section are ways to call the API using different programming languages, so you don't need to pay attention to them, as you will create the job through the Cloud Console. To further address this part, see the full steps I included below.
2. The answer that you're trying to follow uses an HTTP target, while the quickstart you've linked uses Pub/Sub; they are different because they have separate use cases. This link shows proper instructions on how to create a scheduler job with an HTTP target. You can create this kind of job straight from the Cloud Console or with a one-liner gcloud command (see also the sketch after the steps below). If your config is incorrect, the trigger will not execute the endpoint URL and you will see an error that you must fix.
3. Addressed in answer #2.
Basically, you just need to follow the instructions in the link you've sent. However, I'll post them here as well, along with my explanation:
1. Go to https://cloud.google.com/scheduler, click on Go to Console, then click on Create Job. Fill in the required fields (those with red asterisks) when creating the Scheduler job.
2. Select HTTP as the target type.
3. Enter this as your URL (modify the capitalized words):
https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/INSTANCE_ZONE/instances/INSTANCE_NAME/start
4. Choose the HTTP method POST.
5. Click Show more and choose "Add OAuth Token" as the Auth header.
6. Enter your service account. This is used to pass an OAuth token when your scheduler job calls the Compute API. Make sure that the service account you use has the "Compute Instance Admin" role, because this role contains the permissions to start/stop your instance. See this instruction on how to grant access to a service account. If you're not sure which service account to use, feel free to use the Compute Engine default service account.
7. Add this as the scope:
https://www.googleapis.com/auth/cloud-platform
The description of this scope: "See, edit, configure, and delete your Google Cloud Platform data."
8. Repeat the same steps for a Stop instance job, changing the URL in step 3 to call stop instead of start.
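For completeness, the same job can also be created programmatically. Here is a minimal sketch using the Node.js client library for Cloud Scheduler; this is my own illustration rather than part of the steps above, and the project, region, job name, schedule, and service account values are placeholders you need to replace:
const { CloudSchedulerClient } = require("@google-cloud/scheduler");

async function createStartVmJob() {
  const client = new CloudSchedulerClient();
  // Placeholders: project, region, instance, schedule and service account
  const parent = client.locationPath("PROJECT_ID", "REGION"); // e.g. us-central1

  const [job] = await client.createJob({
    parent,
    job: {
      name: `${parent}/jobs/start-my-vm`, // hypothetical job name
      schedule: "0 7 * * *",              // every day at 07:00
      timeZone: "Etc/UTC",
      httpTarget: {
        uri: "https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/INSTANCE_ZONE/instances/INSTANCE_NAME/start",
        httpMethod: "POST",
        oauthToken: {
          serviceAccountEmail: "SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com",
          scope: "https://www.googleapis.com/auth/cloud-platform",
        },
      },
    },
  });
  console.log(`Created ${job.name}`);
}

createStartVmJob().catch(console.error);
The Console steps above or the one-liner gcloud command achieve the same thing; this is just an equivalent way to keep the job definition in code.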

Asana: Does API provide user activity logs?

Does the Asana API provide some method of getting user activity logs? I am interested in activity logs and login history logs. Logs could look, for example, like "user created a task xyz" or "user created a project".
I went through the documentation and could not find any such API/REST endpoint. Does Asana keep such logs
in their system? If yes, is there a way to get them? If not, is it planned for a future release of the API?
(Asana dev here.)
This isn't something we currently provide. We're working on a system for getting semi-realtime updates to "subscriptions", but we were primarily thinking about subscribing to tasks, projects, workspaces, and so on. Subscribing to a user's activity would be an interesting use case, but one we haven't considered up until now, and one that might be a bit trickier.
Thanks for the feedback!

Error monitor for different applications

Currently we have many applications, where each application has its own error notification and reporting mechanism, so we clearly have many problems:
Lack of consistent error monitoring across different systems/applications: different GUIs, interfaces, different messages, etc.
Different approaches for error notification per application (many applications use email notifications, other applications publish messages to queue, etc.).
Separated configuration settings for reporting and monitoring per application: notification frequency, message recipients, etc.
You could add many other issues to the list, but the point is clear. Currently there is a plan to develop a custom application or service to provide a consistent and common solution for this situation.
Anyway, I am not sure it is a good idea to create a custom application for this; I am sure there must be a framework, platform, or existing solution or product (preferably open source) that already solves this problem. So my question is: do you know which projects or products we should check before deciding to create our own custom application?
Thanks!
Have a look at AlertGrid; it works as a centralized event handler and notification dispatcher. It can collect events from different sources, and you can easily manage event handling by creating rules in a visual editor. So you can filter events and raise notifications (email, SMS, phone - works worldwide) whenever your custom condition is met. You can react not only to events that occurred but also to the ones that did not occur (detect missing 'heartbeats'). All you need is to feed AlertGrid with events (Signals) through a very simple API.
I'm not quite sure if this is what you're looking for. I'm on the AlertGrid dev team, so if you have any questions, feel free to ask. We constantly develop this tool and appreciate any feedback.
Depending on how much information is written to application logs, you could consider using Hyperic. It's open source and has a lot of the features you are looking for.
http://www.hyperic.com/
Bugsnag is an awesome option if you are looking to do cross-platform error monitoring. It supports some 50 platforms in one interface. https://blog.bugsnag.com/react-native-plus-code-push/