How to delete a monthly partition from VictoriaMetrics vmstore nodes

We are backfilling old data into our VictoriaMetrics and Grafana setup, but we may have messed up the data for one month. We want to delete that monthly partition from the vmstore nodes and start again. Is this possible through the APIs?

Yes, you can do this via the VictoriaMetrics delete API, /api/v1/admin/tsdb/delete_series. Note that this API deletes all of the data for the matched series; you cannot delete just a time range, so you must delete the affected series entirely and then restore/backfill them again.
See examples below:
Single-node VictoriaMetrics
curl -v http://localhost:8428/api/v1/admin/tsdb/delete_series -d 'match[]=your_metric_name_here'
Cluster version of VictoriaMetrics
curl -v http://<vmselect>:8481/delete/0/prometheus/api/v1/admin/tsdb/delete_series -d 'match[]=your_metric_name_here'
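If the goal is to wipe everything and backfill from scratch, a match-all selector can be used; note that this removes every series, so double-check before running it:
curl -v http://localhost:8428/api/v1/admin/tsdb/delete_series -d 'match[]={__name__!=""}'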
VictoriaMetrics documentation has a guide "How to delete or replace metrics in VictoriaMetrics" with more explanation and examples.

The easiest and fastest way to remove a month's worth of data in VictoriaMetrics is to stop VictoriaMetrics (or the vmstorage node in the cluster version), delete the YYYY_MM directory for the needed month under the <-storageDataPath>/data/{small,big} directories, and then start VictoriaMetrics again. See more details about storage in VictoriaMetrics in these docs.
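For example, a rough sketch for a single vmstorage node (the service name, data path, and the 2021_05 month below are illustrative assumptions; adjust them to your deployment):
# Stop the node, drop the month's partition from both partition dirs, restart
systemctl stop vmstorage
rm -rf /var/lib/vmstorage-data/data/small/2021_05
rm -rf /var/lib/vmstorage-data/data/big/2021_05
systemctl start vmstorage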

Related

Is it possible to look up an eks cluster without knowing the region?

Ideally, I'd like to be able to do this with the aws cli, but I'm open to alternatives. Assuming I'm authenticated to a particular aws account, is there any way to look up basic information about a cluster, or all clusters in the account, without knowing what region the cluster is in? I'd love a way to get information about a cluster without already knowing meta information about it. I could write a script to cycle through all regions looking for clusters, but I hope there's a better way.
Here is a bash for loop that should do the trick:
for region in $(aws ec2 describe-regions --output text | cut -f4)
do
echo -e "\nListing Clusters in region: $region..."
aws eks list-clusters --region "$region" --output text --no-cli-pager
done
A handy command is eksctl get cluster --all-regions -o json.

Slow Response from ABP Authentication

I have just downloaded and configured my first ABP solution and I'm having a performance problem.
I chose the option to have a separate site for IdentityServer. I configured a database and changed the ConnectionString entries in the appsettings.json files of the Hosts project, Migration project, and the IdentityServer project. I followed all the instructions in the getting started tutorial.
Everything (eventually) works but each time I try to authenticate myself either to the Swagger site or the Angular website, there is a significant (minutes-long) delay before I am either logged in or the request times out.
Suspected Problem:
So I read that the site uses a redis cache during login. I have never used this technology before. I had to get that installed.
I used the following commands to pull down the image and run it in Docker - another technology that I have not used before:
PS C:\WINDOWS\system32> docker pull redis
Using default tag: latest
latest: Pulling from library/redis
a330b6cecb98: Pull complete
14bfbab96d75: Pull complete
8b3e2d14a955: Pull complete
5da5e1b21a2f: Pull complete
6af3a5ca4596: Pull complete
4f9efe5b47a5: Pull complete
Digest: sha256:e595e79c05c7690f50ef0136acc9d932d65d8b2ce7915d26a68ca3fb41a7db61
Status: Downloaded newer image for redis:latest
docker.io/library/redis:latest
PS C:\WINDOWS\system32> docker run --name development9-redis -d redis
eee1a05c90e7a492a19eab025fe307b17040ba35ea2f3bc5fbd5df1bab372028
This appeared to do something, so I assume my cache is running and available. Am I missing something? Could a misconfiguration of redis be the cause of my performance problem?
Please ask me any relevant questions you'd like and I will describe my set up. Thanks.
As you suspected, your performance issue is probably related to an improper Redis configuration, and fixing it should significantly reduce the response time.
Check that Redis is listening on port 6379, and also check whether it is actually receiving requests.
If you are wondering why Redis is needed at all, you might find this comment useful (Redis lets IdentityServer and your host application share data):
"run the command docker run --name redis-container -p 6379:6379 -d redis and change the redis connection string in your appsettings to localhost:6379."
https://github.com/abpframework/abp/issues/3487#issuecomment-611208048
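To make those checks concrete, here is a minimal sketch (the container name comes from your output; everything else assumes plain Docker and the default Redis port):
# Recreate the container with port 6379 published to the host;
# the original docker run did not map any ports, so the apps could not reach it
docker rm -f development9-redis
docker run --name development9-redis -p 6379:6379 -d redis
# Verify Redis is reachable: this should print PONG
docker exec -it development9-redis redis-cli ping
# Watch incoming commands while you log in, to confirm the apps actually hit Redis
docker exec -it development9-redis redis-cli monitor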

Backing up redis at a specific time of day

Is there any way to schedule redis back-ups at a specific time of day (e.g. 3:00 AM GMT) - preferably via a setting in the accompanying conf file?
I already understand that one can set a backup rule in redis configuration (e.g. save every X hours if Y keys have changed).
But how does one schedule the said backup at a particular time of day? Would love to know something basic, but effective. In case it matters, my redis version is 5.0.3
As far as I know, it is currently not possible from inside Redis, but it's achievable using crontab. Here is a short example:
Create a backup script file at /tmp/backup.sh containing:
#!/bin/sh
echo save | redis-cli >> /tmp/redis-backup.log
If using sockets, the above would be:
echo save | redis-cli -s /var/run/redis.sock >> /tmp/redis-backup.log
The socket location in your system may vary.
Next, give execute permission to the script:
chmod +x /tmp/backup.sh
Finally, make an entry in crontab: crontab -e
0 3 * * * /tmp/backup.sh
This will run backup.sh at exactly 3:00 AM every day.
In case you want to disable Redis's save rules in the conf (without restarting the Redis instance), the best way is to log into redis-cli and issue CONFIG SET save "". Double-check that it worked via CONFIG GET save. Don't forget to change the save settings in the relevant conf file as well, or the old rules will return on restart. Lastly, it's wiser to use BGSAVE instead of SAVE on a production Redis instance: SAVE blocks the server while the snapshot is written, whereas BGSAVE forks and snapshots in the background.
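As a sketch of that BGSAVE approach (the RDB path and backup directory below are assumptions; adjust them for your system), /tmp/backup.sh could instead look like:
#!/bin/sh
# Trigger a non-blocking background snapshot and wait for it to finish
before=$(redis-cli LASTSAVE)
redis-cli BGSAVE >> /tmp/redis-backup.log
while [ "$(redis-cli LASTSAVE)" = "$before" ]; do sleep 1; done
# Copy the finished snapshot to a dated backup file
cp /var/lib/redis/dump.rdb "/backups/redis-$(date +%F).rdb"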
For more, check out these links:
How To Back Up and Restore Your Redis Data
Cron Scheduler
How To Start/Stop/Restart Cron Service In Linux

Need Persistent key value store which can be accessed via http

I am looking for a persistent key-value DB which can be accessed via HTTP. I need to use it for storing Postman test script data. I have heard of rocksdb and leveldb, but I am not sure whether they can be accessed via HTTP.
leveldb and rocksdb don't have a network component.
I created a small Python project that exposes a document-datastore-like API which you can query using REST. Have a look at it: https://github.com/amirouche/deuspy. It relies on leveldb for persistence.
There is a Python asyncio client, and you can easily create a client of your own.
To get started, you can simply do the following:
pip3 install deuspy
python3 -m deuspy.server
And then start querying.
Here is an example curl-based session:
$ curl -X GET http://localhost:9990
{}
$ curl -X POST --data '{"héllo": "world"}' http://localhost:9990
3252169150753703489
$ curl -X GET http://localhost:9990/3252169150753703489
{"h\u00e9llo": "world"}
You can also filter documents. Look at how the asyncio client is implemented.
Take a look at Webdis, which provides HTTP REST API access to the Redis key-value store. Redis has very good performance and scalability.
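For example, a quick session sketch, assuming Webdis is running on its default port 7379 in front of a local Redis (Webdis maps URL path segments to Redis commands and returns JSON):
$ curl http://127.0.0.1:7379/SET/hello/world
{"SET":[true,"OK"]}
$ curl http://127.0.0.1:7379/GET/hello
{"GET":"world"}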

Mappings between Docker Remote API and its command line client

Docker documentation is pretty good at describing what you can do from the command line.
It also gives a pretty comprehensive description of the commands associated with the remote API.
It does not, however, appear to give sufficient context for using the remote API to do things that one would do using the command line.
An example of what I am talking about: suppose you want to do a command like:
docker run --rm=true -i -t -v /home/user/resources:/files -p 8080:8080 --name SomeService myImage_v3
using the Remote API. There is a container "run" command in the Remote API:
POST /containers/(id or name)/start
And this command refers back to the create container command for the rather long list of JSON strings that you would need to add in order to do the actual start.
The problem here is: first, just calling this command doesn't work. Apparently there is more that you have to do (I am guessing you have to do a create, then a start). Second, it is unclear which JSON strings you need to use in order to do what I showed in the command line (like setting ports, mapping to the external directory, etc). Not only do the JSON strings provided in the remote API documentation not line up with the command line parameters (at least, not in any way that is obvious!), but it is unclear which JSON strings are required for the create (assuming that we have to do a create, which isn't established yet!) and which are required for the start.
This is just related to starting a container. Suppose you want to stop and destroy a container, as in:
docker stop SomeService
docker rm SomeService
Granted, there appear to be one-to-one commands for doing this in the remote API:
POST /containers/(id or name)/stop
POST /containers/(id or name)/kill
But it seems that the IDs you can pass them do not correspond to the IDs shown when you list containers or images.
Is there somewhere I can go to gather information on how to set up and use remote API commands that relates these commands and their JSON parameters to the commands and parameters in the command line?
Failing that, can someone please tell me how to do the start that I showed in my illustration using the remote API?
In any event: is there someone working on docker development I can bring these documentation issues to? It is, I believe, a big "hole" in their documentation.
Someone please advise...
docker run is a combination of docker create followed by docker start, so use https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#create-a-container, followed by https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#start-a-container
If you're running "interactively", you may need to attach to the container after that; https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#attach-to-a-container
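For the specific docker run shown above, here is a hedged sketch of the create-then-start sequence against a local daemon socket (SomeService and myImage_v3 come from the question; the JSON field names follow the create-a-container schema, and curl 7.40+ is assumed for --unix-socket):
# 1. Create the container: -v maps to HostConfig.Binds, -p to PortBindings,
#    -i/-t to OpenStdin/Tty. Note --rm has no JSON field in API v1.22;
#    the docker client implements it by removing the container after it exits.
curl --unix-socket /var/run/docker.sock -H "Content-Type: application/json" \
  -X POST "http://localhost/containers/create?name=SomeService" \
  -d '{
        "Image": "myImage_v3",
        "Tty": true,
        "OpenStdin": true,
        "ExposedPorts": { "8080/tcp": {} },
        "HostConfig": {
          "Binds": ["/home/user/resources:/files"],
          "PortBindings": { "8080/tcp": [ { "HostPort": "8080" } ] }
        }
      }'
# 2. Start it by name, or by the Id returned from the create call
curl --unix-socket /var/run/docker.sock -X POST http://localhost/containers/SomeService/start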