How to download traces from Jaeger?

In the Jaeger UI at http://localhost:16686/search, there is an option to upload JSON files for traces.
I wonder whether we can download traces from Jaeger itself and use them later for finding performance issues.
How can we do that? I see no option to download traces in the Jaeger UI.

When you open an individual trace in the Jaeger UI, there is a View dropdown in the top right corner. One of the options is to view/download the given trace as a JSON file.
You can also programmatically query the Jaeger query service via its JSON/Protobuf API, but those endpoints do not return a data format that you can load back into the UI.
https://www.jaegertracing.io/docs/latest/apis/#trace-retrieval-apis
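As an illustrative sketch (not from the original answer), pulling one trace as JSON from the query service could look like this; the port is the default for the all-in-one setup, the trace ID is a placeholder, and this endpoint is the UI's internal one, so it may change between versions:

import { writeFileSync } from "node:fs";

// Sketch: download one trace as JSON from the Jaeger query service.
// Assumes the default local setup on localhost:16686.
async function downloadTrace(traceId: string): Promise<void> {
  const res = await fetch(`http://localhost:16686/api/traces/${traceId}`);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const body = await res.json();
  writeFileSync(`${traceId}.json`, JSON.stringify(body, null, 2));
}

downloadTrace("0123456789abcdef").catch(console.error); // placeholder trace ID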

See the answer on this Jaeger issue: you will need to query Elasticsearch (or whichever backend the data is stored in) directly.
Alternatively, you could raise an issue on jaeger-ui detailing your use case.

Related

Azure App Insights, is there a way to query for thread count details?

This question is mostly for DevOps experts working with App Insights.
I found an issue in my app: it seems some threads are being created and never released, causing the thread count to increase until it ends in the "CGI error" that usually happens when you exceed your quota for some resource.
I already identified that the exceeded resource is the thread count, thanks to the Metrics option, which gives a graphical representation of how it is consumed (and released when an app restart happens).
I would like more detail on this: not the aggregated numbers, but the actual data behind this graph. Any lead would help me understand which place is creating and not releasing threads; a namespace, a class name, anything.
Is there another place where I could get this information in a very detailed way? App Insights queries seem to lack this metric.
Thanks in advance.
AFAIK there is no direct way to do this. The only way I can see is adding custom logging inside your application and sending the logs to a Log Analytics workspace.
Inside your function app in the portal, go to 'Diagnostic settings' and connect it to your Log Analytics workspace (create one if it doesn't exist).
Inside the Log Analytics workspace you will find your custom logs under either a 'Custom Logs' tab or an 'Application Insights' tab. After that, find the correct field and parse it, something like:
customMetrics
| extend d = parse_json(customDimensions)       // customDimensions arrives as a JSON string
| extend processSessionId = d.processSessionId  // pull out the field your app logged
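On the application side, a minimal sketch of emitting such telemetry with the Application Insights Node.js SDK; the metric name, value, and processSessionId property are placeholders, not values from the original question:

import * as appInsights from "applicationinsights";

// Assumes the connection string is available in the app settings.
appInsights.setup(process.env.APPLICATIONINSIGHTS_CONNECTION_STRING).start();
const client = appInsights.defaultClient;

// Hypothetical metric: report a thread/worker count with a custom dimension
// so it can be extracted with parse_json(customDimensions) as in the query above.
client.trackMetric({
  name: "WorkerThreadCount", // placeholder metric name
  value: 42,                 // replace with the real measurement
  properties: { processSessionId: "session-123" }, // lands in customDimensions
});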
For Azure related topics there is also a decent Q&A platform here:
https://learn.microsoft.com/en-us/answers/products/azure?product=all
For KQL (the Kusto Query Language used above) this is a handy page:
https://learn.microsoft.com/en-us/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor
Hope this helps somewhat

Connecting to an API using Power BI

I am trying to connect to an API using Power BI. I am doing this by clicking Get Data, choosing Web, and typing the URL of the site. It comes back with the list of API links, but when I drill into any of them I cannot see anything. Is this the correct way of doing it? Many thanks for your help.
For querying REST APIs, that's the correct way of doing it, yes. In your case it might be an issue with the API itself, or you may need to add more details to the query; it's hard to troubleshoot without viewing the API documentation.
If you then press Advanced, you can include further details for your request if needed.
I will include some screenshots below of how to choose the Web connector.

Does Jaeger provide a trace API?

Does Jaeger provide a way of querying the trace data without using the provided UI? I'm aware that Zipkin provides an API to directly access the trace data, etc.
Use case: I'm trying to use the trace data to pull together a custom report for internal purposes. I could scrape the data from the UI, but I wondered if there was an easier way.
Your best chance is to get the data from whatever storage the Jaeger collector is using (Cassandra, Elasticsearch): https://www.jaegertracing.io/docs/1.6/deployment/
My suggestion is to store the data in Elasticsearch and use Kibana to accomplish what you need.
Old topic, but the current version of the Jaeger Query UI is a single-page app with an underlying API that allows the same query capabilities as the UI.
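As a sketch of what calling that API can look like (the service name is a placeholder, and the endpoint is internal and undocumented, so it may change between versions):

// Sketch: search recent traces for a service via the query service's
// internal HTTP API (the same one the single-page UI calls).
async function findTraces(service: string, limit = 20): Promise<unknown[]> {
  const url =
    `http://localhost:16686/api/traces?service=${encodeURIComponent(service)}` +
    `&limit=${limit}&lookback=1h`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const { data } = await res.json(); // each entry is a trace with its spans
  return data;
}

findTraces("my-service").then((traces) => console.log(traces.length));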

Meteor Amazon S3 image upload with thumbnails

I'm using Meteor and would like to create a form with an image upload field that saves the uploaded file to an Amazon S3 bucket in its original size, as well as in multiple thumbnail sizes defined (passed) via the code.
So far I'm using the lepozepo:s3 package, which works great but doesn't seem to offer options for generating additional thumbnails.
Given I can upload the original files to S3, I'm considering looking into a service on Amazon that can generate the desired thumbnails and then notify my Meteor app. But I'm not sure how to achieve that.
Can anyone point me in the right direction or share some insight into the best approach for this?
PS: I want to avoid using Filepicker.io if possible.
Seems I was following the wrong path. CollectionFS has everything I need and more. I now have this working, with plenty of scope to do more later. It is one brilliant collection of packages, with clear guides on the respective GitHub pages.
Here are the packages I ended up using:
cfs:standard-packages - base
cfs:gridfs - required for some reason, not sure why
cfs:graphicsmagick - thumbnailing/cropping
cfs:s3 - S3 upload
Code sample →
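As a rough sketch of how those packages fit together (the bucket names and thumbnail size are placeholders, not taken from the original sample):

// Meteor globals provided by the cfs:* packages, typed loosely for the sketch.
declare const FS: any;
declare const gm: any;

// Originals go straight to S3 unchanged.
const originalStore = new FS.Store.S3("originals", {
  bucket: "my-app-images", // placeholder bucket
});

// The thumbnail store resizes on write via cfs:graphicsmagick.
const thumbStore = new FS.Store.S3("thumbs", {
  bucket: "my-app-thumbs", // placeholder bucket
  transformWrite(fileObj: any, readStream: any, writeStream: any) {
    gm(readStream, fileObj.name()).resize(100).stream().pipe(writeStream);
  },
});

// One collection, multiple stores: each upload lands in both.
const Images = new FS.Collection("images", {
  stores: [originalStore, thumbStore],
});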
CollectionFS is now deprecated, but there are other options:
Upload only, without S3 integration: https://github.com/tomitrescak/meteor-uploads
It uses jQuery-File-Upload (which is great): it generates thumbs, has size and format validation, etc. It is basically these two packages together:
https://atmospherejs.com/tomi/upload-jquery
https://atmospherejs.com/tomi/upload-server
You can use another package for the S3 integration.
For example: https://github.com/peerlibrary/meteor-aws-sdk/
Upload + Integration with S3: https://github.com/Lepozepo/S3
Good, but if you need to generate thumbs, for example, you will need to integrate another package or do it yourself. I have not tested it, but I got this suggestion: https://github.com/jamgold/cropuploader
Upload only, but with examples of how to generate thumbs or integrate with S3 / Dropbox / GridFS: https://github.com/VeliovGroup/Meteor-Files/
Rich documentation, and it does well what it proposes: uploading images.
Use whichever adapts best to your needs.
Look at blueimp's jQuery File Upload for client-side and server-side image resizing. On the client you have somewhat limited possibilities quality-wise; on the server you can use the full power of ImageMagick. Or look at my blog post on http://doctorllama.wordpress.com for file uploads in Meteor in general.
cfs:gridfs - required for some reason, not sure why
Meteor uses GridFS to store file chunks inside the Mongo database. In the case of S3, it is used for temporary storage.

Possible to get image from Amazon S3 but create it if it doesn't exist

I'm not sure how to word the question but here is what I am looking to do.
I have a site that uses custom map tile overlays on a Google map.
The JavaScript calls a PHP file on my server that checks whether a map tile already exists for the given x, y, and zoom level.
If it exists, it displays that image using file_get_contents.
If it doesn't exist, it creates the new tile and then displays it.
I would like to use Amazon S3 to store and serve the images, since there could end up being a lot of them and my server is slow. If I have my script check whether the image exists on Amazon and then display it, I am guessing I am not getting the speed benefits of Amazon's CDN. Is there a way to do this?
Or is there a way to try to pull the file from Amazon first, then set up something on Amazon to redirect to my script if the file's not there?
Maybe host the script on another of Amazon's services? The tile generation is also quite slow in some cases.
Thanks
Ideas:
1 - Use CloudFront, but point it at a cluster of tile-generation machines. This way you can generate the tiles on demand, and any future requests are served straight from CloudFront.
2 - Use CloudFront, but back it with an S3 store of generated tiles. Turn on logging for the S3 bucket so you can detect failed requests, then consume those logs on a schedule and generate the missing tiles (see the sketch below this list). This is a cheaper way of generating tiles, but it means that when a tile is missing, the user gets nothing.
3 - Just pre-generate all the tiles. Throw tasks into an SQS queue, then spin up a collection of EC2 instances to generate them. This costs the most up front, but all users get a fast experience.
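As a sketch of the check-then-generate step behind ideas 1 and 2, using the AWS SDK for JavaScript v3 (the bucket name, key layout, region, and generateTile are all placeholders):

import { S3Client, HeadObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";

// Placeholder for the existing tile generator (the PHP script's job today).
declare function generateTile(x: number, y: number, z: number): Promise<Uint8Array>;

const s3 = new S3Client({ region: "us-east-1" }); // region is an assumption
const Bucket = "my-map-tiles";                    // placeholder bucket

// Serve the tile from S3 if it exists; otherwise generate and upload it.
async function ensureTile(x: number, y: number, z: number): Promise<string> {
  const Key = `tiles/${z}/${x}/${y}.png`;
  try {
    await s3.send(new HeadObjectCommand({ Bucket, Key }));
  } catch {
    const png = await generateTile(x, y, z);
    await s3.send(new PutObjectCommand({ Bucket, Key, Body: png, ContentType: "image/png" }));
  }
  // Hand back the S3 (or CloudFront) URL so the client fetches from the CDN.
  return `https://${Bucket}.s3.amazonaws.com/${Key}`;
}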
I've written a blog post with a strategy for dealing with this. It's designed to make intelligent and thrifty use of CloudFront, maximize caching and deal with new versions of existing images. You may find the technique described there helpful. The example code shows how to handle different dimensions (i.e. thumbnails) of images. You could modify it to handle different zoom levels.
I need to update that post to support CloudFront custom origins, and I think that for your application you might be better off skipping S3 and using a custom origin. The advantage of a custom origin is simply that it's probably going to be easier to manage all of your images on your local filesystem compared to managing them on S3.