I've been testing Ceph with S3.
My test environment is 3 nodes, each with a 10GB data disk, so 30GB in total.
It's set to replicate 3 times, so I have "15290 MB" of space available.
I got the S3 bucket working and have been uploading files until the storage filled up. I then tried to remove those files, but the disks are still shown as full:
cluster 4ab8d087-1802-4c10-8c8c-23339cbeded8
health HEALTH_ERR
3 full osd(s)
full flag(s) set
monmap e1: 3 mons at {ceph-1=xxx.xxx.xxx.3:6789/0,ceph-2=xxx.xxx.xxx.4:6789/0,ceph-3=xxx.xxx.xxx.5:6789/0}
election epoch 30, quorum 0,1,2 ceph-1,ceph-2,ceph-3
osdmap e119: 3 osds: 3 up, 3 in
flags full,sortbitwise,require_jewel_osds
pgmap v2224: 164 pgs, 13 pools, 4860 MB data, 1483 objects
14715 MB used, 575 MB / 15290 MB avail
164 active+clean
I am not sure how to get the disk space back.
Can anyone advise on what I have done wrong or missed?
I'm a beginner with Ceph and had the same problem.
Try running the garbage collector.
List what will be deleted:
radosgw-admin gc list --include-all
Then run it:
radosgw-admin gc process
If that didn't work (as it didn't for me with most of my data),
find the pool holding your data:
ceph df
Usually your S3 data goes in the default pool default.rgw.buckets.data.
Purge every object from it. /!\ You will lose all your data /!\
rados purge default.rgw.buckets.data --yes-i-really-really-mean-it
I don't know yet why Ceph is not purging this data itself (still learning...).
Thanks to Julien for this info.
You are right about steps 1 and 2.
When you run
radosgw-admin gc list --include-all
you see output like:
[
{
"tag": "17925483-8ff6-4aaf-9db2-1eafeccd0454.94098.295\u0000",
"time": "2017-10-27 13:51:58.0.493358s",
"objs": [
{
"pool": "default.rgw.buckets.data",
"oid": "17925483-8ff6-4aaf-9db2-1eafeccd0454.24248.3__multipart_certs/boot2docker.iso.2~UQ4MH7uZgQyEd3nDZ9hFJr8TkvldwTp.1",
"key": "",
"instance": ""
},
{
"pool": "default.rgw.buckets.data",
"oid": "17925483-8ff6-4aaf-9db2-1eafeccd0454.24248.3__shadow_certs/boot2docker.iso.2~UQ4MH7uZgQyEd3nDZ9hFJr8TkvldwTp.1_1",
"key": "",
"instance": ""
}, ....
If you notice the time field,
2017-10-27 13:51:58.0.493358s
then when running
radosgw-admin gc process
it will only clear/remove parts that are older than that time.
E.g. I can run "radosgw-admin gc process" over and over again, but the files won't be removed until after "2017-10-27 13:51:58.0.493358s".
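If you don't want to wait that long, the delay before objects become eligible for GC can be shortened. A minimal sketch, assuming a ceph.conf-based setup like Jewel (the [client.rgw.ceph-1] section name is an assumption, use your own rgw instance name), followed by a radosgw restart:
# Example ceph.conf fragment (values are illustrative).
# rgw_gc_obj_min_wait (default 2 hours): how long a deleted object's parts wait before GC may reclaim them.
# rgw_gc_processor_period (default 1 hour): how often the GC thread runs.
[client.rgw.ceph-1]
    rgw gc obj min wait = 300
    rgw gc processor period = 300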
But you are also right:
rados purge default.rgw.buckets.data --yes-i-really-really-mean-it
works as well.
You can list all buckets to be processed by GC (garbage collection) with:
radosgw-admin gc list --include-all
Then you can check that GC will run after the specified time. Or you can run it manually:
radosgw-admin gc process --include-all
It will start the garbage collection process, and with "--include-all" it will process all entries, including unexpired ones.
Then you can check the progress of clean-up with:
watch -c "ceph -s"
or simply check with "ceph -s" that everything that was supposed to be deleted is gone. You can find documentation on the GC settings here:
https://docs.ceph.com/en/quincy/radosgw/config-ref/#garbage-collection-settings
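For reference, here is one way to inspect the GC settings your RGW is currently running with (a sketch; the admin socket name depends on your rgw instance and deployment):
# Dump the running RGW config via its admin socket and keep only the GC knobs.
ceph --admin-daemon /var/run/ceph/ceph-client.rgw.ceph-1.asok config show | grep rgw_gc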
I am (unfortunately) using Hitachi Content Platform for S3 object storage, and I need to sync around 400 images to a bucket every 2 minutes. The filenames are always the same, and the sync "updates" the original file with the latest image.
Originally, I was unable to overwrite existing files. Unlike other platforms, HCP won't update a file that already exists when versioning is disabled; it returns a 409 and refuses to store the file. So I've enabled versioning, which allows the files to be overwritten.
The issue now is that HCP is set to retain old versions for 0 days for my bucket (which my S3 admin says should cause it to retain no versions) and "Keep deleted versions" is also disabled, but the bucket is still filling up with objects (400 files every 2 minutes = ~288K per day). It seems to cap out at this amount: after the first day it stays at 288K permanently, which suggests it eventually removes the old versions after 1 day.
Here's an example script that simulates the problem:
# Generate 400 files with the current date/time in them
for i in $(seq -w 1 400); do
echo $(date +'%Y%m%d%H%M%S') > "file_${i}.txt"
done
# Sync the current directory to the bucket
aws --endpoint-url $HCP_HOST s3 sync . s3://$HCP_BUCKET/
# Run this a few times to simulate the 2 minute upload cycle
The initial sync is very quick and takes less than 5 seconds, but throughout the day it becomes slower and slower as the bucket accumulates more versions, eventually sometimes taking over 2 minutes to sync the files (which is bad, since I need to sync the files every 2 minutes).
If I try to list the objects in the bucket after 1 day, only 400 files come back in the list, but it can take over 1 minute to return (which is why I need to add --cli-read-timeout 0):
# List all the files in the bucket
aws --endpoint-url $HCP_HOST s3 ls s3://$HCP_BUCKET/ --cli-read-timeout 0 --summarize
# Output
Total Objects: 400
Total Size: 400
I can also list and see all of the old unwanted versions:
# List object versions and parse output with jq
aws --endpoint-url $HCP_HOST s3api list-object-versions --bucket $HCP_BUCKET --cli-read-timeout 0 | jq -c '.Versions[] | {"key": .Key, "version_id": .VersionId, "latest": .IsLatest}'
Output:
{"key":"file_001.txt","version_id":"107250810359745","latest":false}
{"key":"file_001.txt","version_id":"107250814851905","latest":false}
{"key":"file_001.txt","version_id":"107250827750849","latest":false}
{"key":"file_001.txt","version_id":"107250828383425","latest":false}
{"key":"file_001.txt","version_id":"107251210538305","latest":false}
{"key":"file_001.txt","version_id":"107251210707777","latest":false}
{"key":"file_001.txt","version_id":"107251210872641","latest":false}
{"key":"file_001.txt","version_id":"107251212449985","latest":false}
{"key":"file_001.txt","version_id":"107251212455681","latest":false}
{"key":"file_001.txt","version_id":"107251212464001","latest":false}
{"key":"file_001.txt","version_id":"107251212470209","latest":false}
{"key":"file_001.txt","version_id":"107251212644161","latest":false}
{"key":"file_001.txt","version_id":"107251212651329","latest":false}
{"key":"file_001.txt","version_id":"107251217133185","latest":false}
{"key":"file_001.txt","version_id":"107251217138817","latest":false}
{"key":"file_001.txt","version_id":"107251217145217","latest":false}
{"key":"file_001.txt","version_id":"107251217150913","latest":false}
{"key":"file_001.txt","version_id":"107251217156609","latest":false}
{"key":"file_001.txt","version_id":"107251217163649","latest":false}
{"key":"file_001.txt","version_id":"107251217331201","latest":false}
{"key":"file_001.txt","version_id":"107251217343617","latest":false}
{"key":"file_001.txt","version_id":"107251217413505","latest":false}
{"key":"file_001.txt","version_id":"107251217422913","latest":false}
{"key":"file_001.txt","version_id":"107251217428289","latest":false}
{"key":"file_001.txt","version_id":"107251217433537","latest":false}
{"key":"file_001.txt","version_id":"107251344110849","latest":true}
// ...
I thought I could just run a job that cleans up the old versions on a regular basis, but I've tried to delete the old versions and it fails with an error:
# Try deleting an old version for the file_001.txt key
aws --endpoint-url $HCP_HOST s3api delete-object --bucket $HCP_BUCKET --key "file_001.txt" --version-id 107250810359745
# Error
An error occurred (NotImplemented) when calling the DeleteObject operation:
Only the current version of an object can be deleted.
I've tested this using MinIO and AWS S3 and my use-case works perfectly fine on both of those platforms.
Is there anything I'm doing incorrectly, or is there a setting in HCP that I'm missing that could make it so I can overwrite objects on sync while retaining no previous versions? Alternatively, is there a way to manually delete the previous versions?
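For what it's worth, here is the kind of thing I would try on a platform that supports S3 lifecycle rules: expire noncurrent versions automatically. This is only a sketch; I have not verified that HCP implements put-bucket-lifecycle-configuration at all, and NoncurrentDays can't go below 1:
# Sketch (untested on HCP): expire noncurrent versions after 1 day via a lifecycle rule.
aws --endpoint-url $HCP_HOST s3api put-bucket-lifecycle-configuration \
  --bucket $HCP_BUCKET \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-old-versions",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "NoncurrentVersionExpiration": {"NoncurrentDays": 1}
    }]
  }'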
https://forums.virtualbox.org/viewtopic.php?f=7&t=90893
Hello, I'm desperate and need help because I have lost about two months of work in my Windows 10 guest system.
Everything worked smoothly until I needed more free space (although I have a dynamic hard disk). So I followed some tutorials and made some changes:
1 - I have the original, almost full disk at: /Maquinas VirtualBox/Clientes Windows/Windows 10/Windows10-disk1.vmdk
2 - I made a copy on an external USB drive.
3 - Converted it to VDI: VBoxManage clonehd /media/eduardo/Seagate\ Backup\ Plus\ Drive/Windows10-disk1.vmdk /media/eduardo/Seagate\ Backup\ Plus\ Drive/Windows10-disk.vdi --format vdi
4 - Tried to resize the disk (from 80GB to 100GB): VBoxManage modifyhd /media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk --resize 100000 and VBoxManage modifymedium disk /media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk --resize 100000 (I think this may have been a mistake, as I should have resized the VDI file instead; see the sketch after these steps).
5 - Then I had to change the UUID (because a "UUID already in use" error came up): VBoxManage internalcommands sethduuid "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk"
6 - Then went back to: VBoxManage clonehd "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk" " " --format vdi
and resized: VBoxManage modifymedium disk "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk.vdi" --resize 120000
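For clarity, this is roughly the sequence I believe I was supposed to follow (a sketch using my paths; sizes are in MB, and I haven't verified it end to end):
# Clone the VMDK to a VDI, resize the VDI, then attach the VDI to the VM.
VBoxManage clonehd "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk1.vmdk" \
    "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk.vdi" --format vdi
VBoxManage modifymedium disk "/media/eduardo/Seagate Backup Plus Drive/Windows10-disk.vdi" --resize 120000
# Then attach the new .vdi in the VM's Storage settings in place of the old .vmdk.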
I switched my virtual machine to the new VDI file to test if everything was fine (changed the disk attachment from /Maquinas VirtualBox/Clientes Windows/Windows 10/Windows10-disk1.vmdk to the new /media/eduardo/Seagate Backup Plus Drive/Windows10-disk.vdi). But I found that the system had somehow gone back two months!
I was not worried and decided to go back to my "untouched" vmdk, but the strangest thing is that the original "untouched" file /Maquinas VirtualBox/Clientes Windows/Windows 10/Windows10-disk1.vmdk also boots with files and a state from about two months ago. So I'm quite nervous.
[Attachment: Selección_058.png]
Looking at the files, the 6c***** one has to be the "good state", as it was modified last night. Here is my file manager:
[Attachment: Selección_059.png]
Here is my VM (I made a snapshot about two months ago, I don't remember exactly when):
https://imagebin.ca/v/4QlKV3Equ1fW
My log:
https://pastebin.com/JSLFRNMs
I hope somebody can help...
I think the key is to somehow return to the 6c**** state of my vmdk file; I don't understand how this vmdk got changed, as it was not touched.
Thanks in advance
The problem was solved. It had nothing to do with resizing disks. I selected the { 6cc3c***-*****} hard disk (although it was "only" 47 GB), and to my surprise it loaded its 47 GB "snapshot" part together with the whole windows10-disk1.vmdk disk.
Sorry for my bad English, it's difficult to explain: in the virtual machine's settings, in the Storage section, select the 6cc***** disk as the main disk and start/boot the VM.
Once it was loaded and working fine, I deleted the snapshot (to merge everything into the present state) and then made another snapshot as a backup.
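In case it helps anyone doing this from the command line, this is roughly the equivalent (a sketch; the VM and snapshot names are placeholders, not my real ones):
# List snapshots, delete the old one (deleting merges it into the current state), then take a new backup snapshot.
VBoxManage snapshot "Windows 10" list
VBoxManage snapshot "Windows 10" delete "old snapshot"
VBoxManage snapshot "Windows 10" take "backup after recovery"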
Thanks
Is there a Mono profiler mode similar to Java -Xloggc?
I would like to see a human-readable GC report while my application is running. Currently Mono can be run with the --profile=log option, but the output is in binary format, and I need to run mprof-report every time to read it. The output file also contains a lot of info that is not interesting to me.
I tried to reduce the file size by specifying heapshot=14400000ms to collect statistics every few hours, but it didn't help much. Within a week I had a log of a few gigabytes.
I also tried to use the "sample" profiler, but the overhead was too high.
You can use Mono's trace filters for this. Just set MONO_LOG_MASK to gc and MONO_LOG_LEVEL to debug. Then run your app normally and you will get human-readable GC statistics while your app is running:
$ export MONO_LOG_MASK=gc
$ export MONO_LOG_LEVEL=debug
$ mono ... # run your application normally ..
...
# notice the human readable GC output
mono: GC_MAJOR: (LOS overflow) pause 26.00ms, total 26.06ms, bridge 0.00ms major 31472K/0K los 1575K/0K
Mono: GC_MINOR: (Nursery full) pause 2.30ms, total 2.35ms, bridge 0.00ms promoted 31456K major 31456K los 5135K
Mono: GC_MINOR: (Nursery full) pause 2.43ms, total 2.45ms, bridge 0.00ms promoted 31456K major 31456K los 8097K
Mono: GC_MINOR: (Nursery full) pause 1.80ms, total 1.82ms, bridge 0.00ms promoted 31472K major 31472K los 11425K
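If you'd rather keep the GC lines in a file, the variables can also be set inline. A sketch (I'm assuming the messages land on stderr; adjust the redirection if they show up on stdout, and "yourapp.exe"/"gc.log" are just placeholders):
# Run the app with GC logging enabled and capture stderr to a file.
MONO_LOG_MASK=gc MONO_LOG_LEVEL=debug mono yourapp.exe 2> gc.log
# Then pull out just the GC lines:
grep 'GC_' gc.log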
I have some fairly simple Hadoop streaming jobs that look like this:
yarn jar /usr/lib/hadoop-mapreduce/hadoop-streaming-2.2.0.2.0.6.0-101.jar \
-files hdfs:///apps/local/count.pl \
-input /foo/data/bz2 \
-output /user/me/myoutput \
-mapper "cut -f4,8 -d," \
-reducer count.pl \
-combiner count.pl
The count.pl script is just a simple script that accumulates counts in a hash and prints them out at the end - the details are probably not relevant but I can post it if necessary.
The input is a directory containing 5 files encoded with bz2 compression, roughly the same size as each other, for a total of about 5GB (compressed).
When I look at the running job, it has 45 mappers, but they're all running on one node. The particular node changes from run to run, but it's always just one node. As a result I'm getting poor data locality, since data is transferred over the network to that node, and probably poor CPU utilization too.
The entire cluster has 9 nodes, all the same basic configuration. The blocks of the data for all 5 files are spread out among the 9 nodes, as reported by the HDFS Name Node web UI.
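(For reference, this is roughly how block placement can also be checked from the shell, using my input path; the output format varies by Hadoop version:)
hdfs fsck /foo/data/bz2 -files -blocks -locations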
I'm happy to share any requested info from my configuration, but this is a corporate cluster and I don't want to upload any full config files.
It looks like this previous thread [ why map task always running on a single node ] is relevant but not conclusive.
EDIT: at #jtravaglini's suggestion I tried the following variation and saw the same problem - all 45 map jobs running on a single node:
yarn jar \
/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-2.2.0.2.0.6.0-101.jar \
wordcount /foo/data/bz2 /user/me/myoutput
At the end of the output of that task in my shell, I see:
Launched map tasks=45
Launched reduce tasks=1
Data-local map tasks=18
Rack-local map tasks=27
which is the number of data-local tasks you'd expect to see on one node just by chance alone.
So I accidentally started 2 ElasticSearch instances on the same machine, one on port 9200, the other on port 9201. This means there are 2 cluster nodes, each with the same name, and each holds half of the total shards for each index.
If I kill one of the instances, I end up with 1 instance holding only half the shards.
How do I fix this problem? I want to have just 1 instance with all the shards in it (like it used to be).
SO... there is a clean way to resolve this, although I must say the ElasticSearch documentation is very, very confusing (all these buzzwords like cluster and zen discovery boggle my mind!).
1) Say you have 2 instances, one on port 9200 and the other on 9201, and you want ALL the shards to be on 9200.
Run this command to disable shard allocation (it's sent to the 9201 instance here, but the setting applies cluster-wide). You can change persistent to transient if you don't want the change to be permanent; I'd keep it persistent so this doesn't ever happen again.
curl -XPUT localhost:9201/_cluster/settings -d '{
"persistent" : {
"cluster.routing.allocation.disable_allocation" : true
}
}'
2) Now, run the command to MOVE all the shards in the 9201 instance to 9200.
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
"commands" : [ {
"move" :
{
"index" : "<NAME OF INDEX HERE>", "shard" : <SHARD NUMBER HERE>,
"from_node" : "<ID OF 9201 node>", "to_node" : "<ID of 9200 node>"
}
}
]
}'
You need to run this command for every shard on the 9201 instance (the one you want to get rid of); a rough way to script that is sketched below.
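Since both nodes have the same name, it's easiest to match on node IDs. A rough sketch of a loop (assumes a version with the _cat and _nodes APIs and that jq is installed; not battle-tested):
# Grab each node's ID by asking that instance directly.
FROM=$(curl -s 'localhost:9201/_nodes/_local' | jq -r '.nodes | keys[0]')   # ID of the 9201 node
TO=$(curl -s 'localhost:9200/_nodes/_local' | jq -r '.nodes | keys[0]')     # ID of the 9200 node
# Walk every shard and move the ones sitting on the 9201 node.
curl -s "localhost:9200/_cat/shards?h=index,shard,id" |
while read index shard id; do
  [ "$id" = "$FROM" ] || continue
  curl -s -XPOST 'localhost:9200/_cluster/reroute' -d "{
    \"commands\" : [ { \"move\" : {
      \"index\" : \"$index\", \"shard\" : $shard,
      \"from_node\" : \"$FROM\", \"to_node\" : \"$TO\"
    } } ]
  }"
done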
If you have ElasticSearch Head, that shard will be purple and have the "RELOCATING" status. If you have lots of data, say > 1 GB, it will take a while for the shard to move - perhaps up to an hour or even more, so be patient. Don't shut down the instance/node until everything is done moving.
That's it!