Jaeger all-in-one with ElasticSearch

I am prototyping the use of Jaeger in an ASP.NET Core (3.1) Web API using the Jaeger C# Client, and I got it working with the All in One approach mentioned in the Getting Started documentation. This works fine for initial prototyping, but I also wanted to test storing traces in an Elasticsearch instance. Luckily, I found another Stack Overflow post about this which contains a docker-compose.yaml for deploying Elasticsearch and all the Jaeger components, and I got that working after a few tweaks to the slightly outdated docker-compose file (details in my answer to that post).
However, while digging through the Jaeger documentation, I found the CLI Flags reference for the jaeger-all-in-one distribution that seems to contradict itself. First, it says
Jaeger all-in-one distribution with agent, collector and query. Use with caution this version by default uses only in-memory database.
But then it also proceeds to say
jaeger-all-in-one can be used with these storage backends:
and then lists jaeger-all-in-one distribution CLI flag details for:
jaeger-all-in-one with cassandra
jaeger-all-in-one with elasticsearch
jaeger-all-in-one with memory
jaeger-all-in-one with badger
jaeger-all-in-one with grpc-plugin
So this implies that the Jaeger all-in-one distribution can be used with Elasticsearch, etc. I am guessing the initial warning about the all-in-one distribution only supporting an in-memory database applies to the jaeger-all-in-one with memory option and not to the others, as otherwise it doesn't make sense.
Can someone with Jaeger experience clarify?

It's not clear in the documentation, but I managed to get it working by providing the SPAN_STORAGE_TYPE environment variable and the respective connection details so that the Jaeger components can talk to storage running outside the all-in-one container.
For instance, I'm running Elasticsearch on my Mac, so I used the following command to run all-in-one:
docker run -d --name jaeger-es \
-e COLLECTOR_ZIPKIN_HTTP_PORT=9411 \
-e SPAN_STORAGE_TYPE=elasticsearch \
-e ES_SERVER_URLS=http://host.docker.internal:9200 \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
-p 14250:14250 \
-p 9411:9411 \
jaegertracing/all-in-one:1.20
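To sanity-check that spans are really being written to Elasticsearch rather than to the in-memory store, one quick check (assuming Jaeger's default index naming, which creates daily jaeger-span-* and jaeger-service-* indices) is to list the indices on the Elasticsearch instance after reporting a few spans:
# List Elasticsearch indices and look for the ones Jaeger creates
curl -s http://localhost:9200/_cat/indices | grep jaeger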

Related

Duplicate kubernetes namespace with its content

How can I duplicate a namespace with all its content under a new name in the same kubernetes cluster?
e.g. duplicate default to my-namespace, which will have the same content.
I'm only interested in services and deployments, so
when I try the method with kubectl get all and with api-resources, I get an error with the service IPs, like:
Error from server (Invalid): Service "my-service" is invalid: spec.clusterIP: Invalid value: "10.108.14.29": provided IP is already allocated
As @coderanger mentioned in his answer, there is no straightforward way to copy the original k8s resources into a separate namespace.
As was proposed, when you invoke the kubectl get all command, k8s looks through the resource catalog bound to the all category. Therefore, if you didn't add this category to each custom CRD object within its specific API group, you will probably miss some relevant k8s resources in the command output.
Furthermore, if you want to export all k8s resources from a particular namespace, beyond user workloads, I would recommend enumerating the API resources, filtering for namespace-scoped objects, and then applying some bash processing to generate a manifest file for each resource group:
kubectl api-resources --namespaced=true | awk '{print $1}' | sed '1d' | while read -r line; do kubectl get "$line" -n namespace -o yaml > "$line.yaml"; done
You can also consider using Helm (as @David Maze pointed out in the comment), in order to manage user workloads through Helm charts, as a more flexible and structured way to describe k8s native resources.
There is no specific way to do this. You could probably get close with something like kubectl get all -n sourcens -o yaml | sed -e 's/namespace: sourcens/namespace: destns/' | kubectl apply -f - but get all is always a bit wonky and this could easily miss weird edge cases.
You can back up your namespace using Velero and then restore it to another namespace or cluster!
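As a rough sketch (assuming Velero is already installed and configured with a backup storage location; the backup name and namespaces below are only illustrative), the backup and restore could look like this:
# Back up only the source namespace
velero backup create default-backup --include-namespaces default
# Restore it into a different namespace using a namespace mapping
velero restore create --from-backup default-backup --namespace-mappings default:my-namespace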

Need Persistent key value store which can be accessed via http

I am looking for a persistent key-value DB which can be accessed via HTTP. I need to use it for storing Postman test script data. I have heard of rocksdb and leveldb, but I am not sure whether they can be accessed via HTTP.
leveldb and rocksdb don't have a network component.
I created a small Python project that exposes a document-datastore-like API that you can query using REST. Have a look at it: https://github.com/amirouche/deuspy. It relies on leveldb for persistence.
There is a Python asyncio client. You can also create a client on your own; it's very easy.
To get started, you can simply do the following:
pip3 install deuspy
python3 -m deuspy.server
And then start querying.
Here is an example curl-based session:
$ curl -X GET http://localhost:9990
{}
$ curl -X POST --data '{"héllo": "world"}' http://localhost:9990
3252169150753703489
$ curl -X GET http://localhost:9990/3252169150753703489
{"h\u00e9llo": "world"}
You can also filter documents. Look at how the asyncio client is implemented.
Take a look at Webdis, which provides HTTP REST API access to the Redis key-value store. Redis has very good performance and scalability.
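For illustration, a session against a local Webdis instance (default port 7379) might look roughly like this; Webdis maps URL path segments onto Redis commands, and the exact JSON responses may differ slightly by version:
# Set a key over HTTP, then read it back
$ curl http://127.0.0.1:7379/SET/hello/world
{"SET":[true,"OK"]}
$ curl http://127.0.0.1:7379/GET/hello
{"GET":"world"}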

CLI command for Sonarqube Upgrade browser step

https://docs.sonarqube.org/display/SONAR/Upgrading
I am just going through this documentation to upgrade Sonarqube.
One of the steps is to open the URL in a browser and follow the instructions.
Is there any CLI command available for this step, so that I can automate it in my upgrade automation?
Most (or even all?) UI interactions only trigger Web API calls.
In your case, api/system/migrate_db seems to serve your purpose.
From the api documentation:
Migrate the database to match the current version of SonarQube.
Sending a POST request to this URL starts the DB migration. It is
strongly advised to make a database backup before invoking this WS.
To call it from the command line use:
curl -s -u admin:admin -XPOST "localhost:9000/api/system/migrate_db"
curl is a Linux command-line tool for communicating via HTTP
-s toggles "silent mode"
-u admin:admin provides authentication
-XPOST sets the HTTP method to POST (instead of the default GET)
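If you want to script the whole step, a rough sketch (assuming the standard api/system/status endpoint, default credentials and the default port) could trigger the migration and then poll until SonarQube reports it is up:
# Trigger the DB migration, then poll until the server reports status UP
curl -s -u admin:admin -X POST "localhost:9000/api/system/migrate_db"
until curl -s "localhost:9000/api/system/status" | grep -q '"status":"UP"'; do
  echo "migration still in progress..."
  sleep 10
done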

Solr 5.x.x - Passing credentials for a web crawl using bin/post?

I have a local instance of Solr 5.4.0 running, and I am attempting to use bin/post to crawl an HTTPS site which requires authentication.
bin/post -c gettingstarted https://somesecureurl.com -recursive 2 -delay 1
I would like to pass my credentials with this request, but I cannot find any documentation for doing so. Is this possible? I have reviewed the following documentation:
http://lucene.apache.org/solr/quickstart.html
https://lucidworks.com/blog/2015/08/04/solr-5-new-binpost-utility/
https://cwiki.apache.org/confluence/display/solr/Post+Tool
bin/post -h
If it is not possible, can someone propose an alternate solution?

Move Amazon EC2 AMIs between regions via web-interface?

Is there any easy way to move a custom AMI image between regions? (Tokyo -> Singapore)
I know you can mess around with the API and S3 to get it done, but is there any easier way to do it?
As of December 2012, Amazon supports migrating an AMI to another region through the UI tool (AWS Management Console). See their documentation here
So, how I've done it is:
1. From the AMI, find out the Snapshot ID and how it is attached (e.g. /dev/sda1).
2. Select the Snapshot, click "Copy", set the destination region and make the copy (takes a while!).
3. Select the new Snapshot, click "Create Image", and fill in:
Architecture: (choose 32 or 64 bit)
Name/Description: (give it one)
Kernel ID: when migrating a Linux AMI, if you choose "default" it may fail. What worked for me was to go to the Amazon kernels listing here to find the kernels Amazon supports, then specify one when creating the image.
Root Device Name: /dev/sda1
Then click "Yes, Create".
4. Launch an instance from the new AMI and test that you can connect.
You can do it using Eric's post:
http://alestic.com/2010/10/ec2-ami-copy
The following assumes your AWS Console utilities are installed in /opt/aws/bin/, JAVA_HOME=/usr, and you are running the i386 architecture; otherwise replace i386 with x86_64.
1) Run a live snapshot, assuming your image can fit in 1.5 GB and you have that much to spare in /mnt (check by running df)
/opt/aws/bin/ec2-bundle-vol -d /mnt -k /home/ec2-user/.ec2/pk-XXX.pem -c /home/ec2-user/.ec2/cert-XXX.pem -u 123456789 -r i386 -s 1500
2) Upload to current region's S3 bucket
/opt/aws/bin/ec2-upload-bundle -b S3_BUCKET -m /mnt/image.manifest.xml -a abcxyz -s SUPERSECRET
3) Transfer the image to EU S3 bucket
/opt/aws/bin/ec2-migrate-image -K /home/ec2-user/.ec2/pk-XXX.pem -C /home/ec2-user/.ec2/cert-XXX.pem -o abcxyz -w SUPERSECRET --bucket S3_BUCKET_US --destination-bucket S3_BUCKET_EU --manifest image.manifest.xml --location EU
4) Register your AMI so you can fire up the instance in Ireland
/opt/aws/bin/ec2-register -K /home/ec2-user/.ec2/pk-XXX.pem -C /home/ec2-user/.ec2/cert-XXX.pem http://s3.amazonaws.com:80/S3_BUCKET/image.manifest.xml --region eu-west-1 --name DEVICENAME -a i386 --kernel aki-xxx
There are API tools for this. http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-MigrateImage.html
I think that is now outdated by ec2-bundle-vol and ec2-migrate-image, BTW you can also take a look at this Perl script by Lincoln D. Stein:
http://search.cpan.org/~lds/VM-EC2/bin/migrate-ebs-image.pl
Usage:
$ migrate-ebs-image.pl --from us-east-1 --to ap-southeast-1 ami-123456
Amazon have just announced support for this functionality in this blog post. Note that the answer from dmohr relates to copying EBSs, not AMIs.
In case the blog post is unavailable, quoting the relevant parts:
To use AMI Copy, simply select the AMI to be copied from within the
AWS Management Console, choose the destination region, and start the
copy. AMI Copy can also be accessed via the EC2 Command Line
Interface or EC2 API as described in the EC2 User’s Guide. Once the
copy is complete, the new AMI can be used to launch new EC2 instances
in the destination region.
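For completeness, the same copy can now be done with a single call to the modern AWS CLI; this is only a sketch, and the image ID, regions and name below are illustrative:
# Copy an AMI from Tokyo to Singapore; --region is the destination region
aws ec2 copy-image \
    --source-region ap-northeast-1 \
    --source-image-id ami-0123456789abcdef0 \
    --region ap-southeast-1 \
    --name "my-copied-ami"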
AWS now supports the copy of an EBS snapshot to another region via UI/CLI/API. You can copy the snapshot and then make an AMI from it. Direct AMI copy is coming - from AWS:
"We also plan to launch Amazon Machine Image (AMI) Copy as a follow-up
to this feature, which will enable you to copy both public and
custom-created AMIs across regions."
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-copy-snapshot.html?ref_=pe_2170_27415460
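If you go the snapshot route, a rough CLI sketch of it (assuming the modern aws CLI; the snapshot IDs, regions and names below are purely illustrative) is to copy the AMI's root snapshot and then register an image from the copy:
# Copy the AMI's root snapshot into the destination region
aws ec2 copy-snapshot \
    --region ap-southeast-1 \
    --source-region ap-northeast-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --description "Root snapshot copied from Tokyo"
# Once the copied snapshot completes, register an AMI from it
aws ec2 register-image \
    --region ap-southeast-1 \
    --name my-copied-ami \
    --architecture x86_64 \
    --virtualization-type hvm \
    --root-device-name /dev/sda1 \
    --block-device-mappings "DeviceName=/dev/sda1,Ebs={SnapshotId=snap-0fedcba9876543210}"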
Ylastic allows you to move EBS-backed Linux images between regions.
It's $25 or $50 per month, but it looks like you can evaluate it for a week.
I just did this using a script on CloudyScripts and it worked fantastically: https://cloudyscripts.com/tool/show/5 (and it's free).
As of 2017, it's pretty simple: just follow the screenshots.
I'll add Scalr to the list of tools you can use (disclaimer: I work there). Within Scalr, you can create your own AMIs (we call them roles). Once your role is ready, you just have to choose where you want to deploy it (in any region).
Scalr is open source, released under the Apache 2 license: you can download it and install it yourself. Otherwise, it is also available through a hosted version including support. Alternatives to Scalr include RightScale and enStratus.