Ceph S3 / Swift bucket create failed / error 416 - amazon-s3

I am getting 416 errors while creating buckets using S3 or Swift. How can I solve this?
swift -A http://ceph-4:7480/auth/1.0 -U testuser:swift -K 'BKtVrq1...' upload testas testas
Warning: failed to create container 'testas': 416 Requested Range Not Satisfiable: InvalidRange
Object PUT failed: http://ceph-4:7480/swift/v1/testas/testas 404 Not Found b'NoSuchBucket'
Also S3 python test:
File "/usr/lib/python2.7/dist-packages/boto/s3/connection.py", line 621, in create_bucket
response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 416 Requested Range Not Satisfiable
<?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidRange</Code><BucketName>mybucket</BucketName><RequestId>tx00000000000000000002a-005a69b12d-1195-default</RequestId><HostId>1195-default-default</HostId></Error>
Here is my ceph status:
cluster:
    id:     1e4bd42a-7032-4f70-8d0c-d6417da85aa6
    health: HEALTH_OK

services:
    mon: 3 daemons, quorum ceph-2,ceph-3,ceph-4
    mgr: ceph-1(active), standbys: ceph-2, ceph-3, ceph-4
    osd: 3 osds: 3 up, 3 in
    rgw: 2 daemons active

data:
    pools:   7 pools, 296 pgs
    objects: 333 objects, 373 MB
    usage:   4398 MB used, 26309 MB / 30708 MB avail
    pgs:     296 active+clean
I am using a Ceph Luminous build with BlueStore:
ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)
User created:
sudo radosgw-admin user create --uid="testuser" --display-name="First User"
sudo radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
sudo radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
Logs on osd:
2018-01-25 12:19:45.383298 7f03c77c4700 1 ====== starting new request req=0x7f03c77be1f0 =====
2018-01-25 12:19:47.711677 7f03c77c4700 1 ====== req done req=0x7f03c77be1f0 op status=-34 http_status=416 ======
2018-01-25 12:19:47.711937 7f03c77c4700 1 civetweb: 0x55bd9631d000: 192.168.109.47 - - [25/Jan/2018:12:19:45 +0200] "PUT /mybucket/ HTTP/1.1" 1 0 - Boto/2.38.0 Python/2.7.12 Linux/4.4.0-51-generic
Linux ubuntu, 4.4.0-51-generic

Set the default pg_num and pgp_num to a lower value (8, for example), or set mon_max_pg_per_osd to a higher value in ceph.conf. Luminous enforces a per-OSD placement-group limit (mon_max_pg_per_osd, default 200); with only 3 OSDs, the pools RGW creates for a new bucket push the PG count per OSD past that limit, and RGW reports the resulting ERANGE (the op status=-34 in your log) as HTTP 416 InvalidRange.
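Either option goes in ceph.conf; a hedged sketch of both (the values shown are illustrative, and the monitors and RGW need a restart to pick them up):

```ini
[global]
# Option A: lower the defaults used for newly created pools
osd_pool_default_pg_num = 8
osd_pool_default_pgp_num = 8
# Option B: raise the per-OSD PG limit instead (Luminous default: 200)
mon_max_pg_per_osd = 800
```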


Filebeat not reading all logs from directory

I am configuring filebeat to send logs located in /var/log/myapp/batch_* to Elasticsearch.
Here is my filebeat configuration:
# Version
filebeat version 7.11.0 (amd64), libbeat 7.11.0 [84c4d4c4034fcb49c1a318ccdc7311d70adee15b built 2021-02-08 22:42:11 +0000 UTC]
# Filebeat config
logging.metrics.period: 1h
logging.to_files: true
logging.files:
  rotateeverybytes: 16777216
  keepfiles: 7
  permissions: 0600
filebeat.inputs:
- type: log
  enabled: true
  scan_frequency: 5m
  paths:
    - "/var/log/myapp/batch_*"
output.elasticsearch:
  hosts: ["server:9200"]
  index: "log_test_app-%{+yyyy.MM.dd}"
setup.ilm.enabled: false
setup.template.name: "log_test_app"
setup.template.pattern: "log_test_app-*"
setup.template.overwrite: false
setup.template.settings:
  index.number_of_shards: 3
  index.number_of_replicas: 1
I can see that only two logs are being sent, although the directory contains a total of eight log files:
2022-05-24T19:39:55.904Z INFO log/input.go:157 Configured paths: [/var/log/myapp/batch_*]
2022-05-24T19:39:55.904Z INFO [crawler] beater/crawler.go:141 Starting input (ID: 3328309751929357009)
2022-05-24T19:39:55.904Z INFO [crawler] beater/crawler.go:108 Loading and starting Inputs completed. Enabled inputs: 1
2022-05-24T19:44:55.905Z INFO log/harvester.go:302 Harvester started for file: /var/log/myapp/batch_emails.log
2022-05-24T19:44:55.905Z INFO log/harvester.go:302 Harvester started for file: /var/log/loyalty/batch_import.log
Here is a listing of the directory files:
ls -l /var/log/loyalty/batch_*
-rw-r--r-- 1 batch batch 154112 May 24 03:20 /var/log/myapp/batch_gifts.log
-rw-r--r-- 1 batch batch 112319 May 24 02:30 /var/log/myapp/batch_http.log
-rw-r--r-- 1 batch batch 7575342 May 24 02:30 /var/log/myapp/batch_vouchers.log
-rw-r--r-- 1 batch batch 4847849 May 24 19:30 /var/log/myapp/batch_ftp.log
-rw-r--r-- 1 batch batch 99413 May 24 03:40 /var/log/myapp/batch_category.log
-rw-r--r-- 1 root root 367207 May 24 19:50 /var/log/myapp/batch_emails.log
-rw-r--r-- 1 batch batch 479 Jan 1 23:00 /var/log/myapp/batch_history.log
-rw-r--r-- 1 batch batch 2420916 Jan 1 23:00 /var/log/myapp/batch_lists.php
-rw-r--r-- 1 batch batch 25779499 May 24 19:50 /var/log/myapp/batch_import.log
Is there something wrong with my setup? I also tried setting the ignore_older parameter to 36h, but still only two log files are processed.
Thanks for the help.
Welcome to Stack Overflow, Emanuel :)
I believe you only want to read the log files (*.log), so you can make the glob pattern explicit:
filebeat.inputs:
- type: log
  enabled: true
  scan_frequency: 5m
  paths:
    - "/var/log/myapp/batch_*.log"
Keep Posted!!! Thanks!!!
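The difference between the two glob patterns can be checked locally; a small sketch using Python's fnmatch, with the file names taken from the listing in the question:

```python
import fnmatch

# file names from the directory listing in the question
files = [
    "batch_gifts.log", "batch_http.log", "batch_vouchers.log",
    "batch_ftp.log", "batch_category.log", "batch_emails.log",
    "batch_history.log", "batch_lists.php", "batch_import.log",
]

# the original pattern also matches the .php file
print(fnmatch.filter(files, "batch_*"))      # all nine names

# the stricter pattern keeps only the real log files
print(fnmatch.filter(files, "batch_*.log"))  # eight names; batch_lists.php excluded
```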

Making Dockerized Flask server concurrent

I have a Flask server that I'm running on AWS Fargate. My task has 2 vCPUs and 8 GB of memory. My server is only able to respond to one request at a time. If I run 2 API requests at the same time, each taking 7 seconds, the first request returns after 7 seconds and the second after 14 seconds.
This is my Dockerfile (using this repo):
FROM tiangolo/uwsgi-nginx-flask:python3.7
COPY ./requirements.txt requirements.txt
RUN pip3 install --no-cache-dir -r requirements.txt
RUN python3 -m spacy download en
RUN apt-get update
RUN apt-get install wkhtmltopdf -y
RUN apt-get install poppler-utils -y
RUN apt-get install xvfb -y
COPY ./ /app
I have the following config file:
[uwsgi]
module = main
callable = app
enable-threads = true
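For comparison, a /app/uwsgi.ini that declares its worker and thread counts explicitly might look like the sketch below; the processes/threads values are illustrative assumptions, not taken from the question, and would need tuning against the 2-vCPU task:

```ini
[uwsgi]
module = main
callable = app
; fork several workers, each running multiple threads,
; so slow requests do not serialize behind one another
processes = 4
threads = 2
enable-threads = true
```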
These are my logs when I start the server:
Checking for script in /app/prestart.sh
Running script /app/prestart.sh
Running inside /app/prestart.sh, you could add migrations to this file, e.g.:
#! /usr/bin/env bash
# Let the DB start
sleep 10;
# Run migrations
alembic upgrade head
/usr/lib/python2.7/dist-packages/supervisor/options.py:298: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2019-10-05 06:29:53,438 CRIT Supervisor running as root (no user in config file)
2019-10-05 06:29:53,438 INFO Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2019-10-05 06:29:53,446 INFO RPC interface 'supervisor' initialized
2019-10-05 06:29:53,446 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2019-10-05 06:29:53,446 INFO supervisord started with pid 1
2019-10-05 06:29:54,448 INFO spawned: 'nginx' with pid 9
2019-10-05 06:29:54,450 INFO spawned: 'uwsgi' with pid 10
[uWSGI] getting INI configuration from /app/uwsgi.ini
[uWSGI] getting INI configuration from /etc/uwsgi/uwsgi.ini
;uWSGI instance configuration
[uwsgi]
cheaper = 2
processes = 16
ini = /app/uwsgi.ini
module = main
callable = app
enable-threads = true
ini = /etc/uwsgi/uwsgi.ini
socket = /tmp/uwsgi.sock
chown-socket = nginx:nginx
chmod-socket = 664
hook-master-start = unix_signal:15 gracefully_kill_them_all
need-app = true
die-on-term = true
show-config = true
;end of configuration
*** Starting uWSGI 2.0.18 (64bit) on [Sat Oct 5 06:29:54 2019] ***
compiled with version: 6.3.0 20170516 on 09 August 2019 03:11:53
os: Linux-4.14.138-114.102.amzn2.x86_64 #1 SMP Thu Aug 15 15:29:58 UTC 2019
nodename: ip-10-0-1-217.ec2.internal
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 2
current working directory: /app
detected binary path: /usr/local/bin/uwsgi
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to UNIX address /tmp/uwsgi.sock fd 3
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
Python version: 3.7.4 (default, Jul 13 2019, 14:20:24) [GCC 6.3.0 20170516]
Python main interpreter initialized at 0x55e1e2b181a0
uWSGI running as root, you can use --uid/--gid/--chroot options
*** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
python threads support enabled
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 1239640 bytes (1210 KB) for 16 cores
*** Operational MODE: preforking ***
2019-10-05 06:29:55,483 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-10-05 06:29:55,484 INFO success: uwsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

ssh key authenticated user unable to use apt-get update

Ubuntu Linux 16.04.5
I have configured a server for SSH Keys Authentication With PuTTY.
Key authentication works fine for the user account accessing the server. The problem is that when the user attempts to run "apt-get update", the system returns:
tornado#freeradius:~$ apt-get update
Reading package lists... Done
W: chmod 0700 of directory /var/lib/apt/lists/partial failed - SetupAPTPartialDirectory (1: Operation not permitted)
E: Could not open lock file /var/lib/apt/lists/lock - open (13: Permission denied)
E: Unable to lock directory /var/lib/apt/lists/
W: Problem unlinking the file /var/cache/apt/pkgcache.bin - RemoveCaches (13: Permission denied)
W: Problem unlinking the file /var/cache/apt/srcpkgcache.bin - RemoveCaches (13: Permission denied)
I have seen on multiple posts on similar topics that the solution is to use "sudo apt-get update" - the issue with this is that the user "tornado" has no password.
Additionally, I have configured "sshd_config":
ChallengeResponseAuthentication no
PasswordAuthentication no
The user "tornado" was created with the "adduser tornado --disabled-password" option, so when attempting "sudo apt-get update" I am prompted for the user "tornado's" non-existent password.
The permissions of "/home/tornado" were changed to 755, which made no difference, so I reverted the chmod.
"stat /home/tornado" output
File: '/home/tornado'
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: fc00h/64512d Inode: 19267592 Links: 4
Access: (2700/drwx--S---) Uid: ( 1000/ tornado) Gid: ( 1000/ tornado)
Access: 2018-09-14 11:28:01.035536814 -0700
Modify: 2018-09-14 10:50:00.864863347 -0700
Change: 2018-09-14 11:33:13.685020607 -0700
Birth: -
"stat /home/tornado/.ssh" output
File: '/home/tornado/.ssh'
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: fc00h/64512d Inode: 19267599 Links: 2
Access: (2700/drwx--S---) Uid: ( 1000/ tornado) Gid: ( 1000/ tornado)
Access: 2018-09-14 11:28:01.035536814 -0700
Modify: 2018-09-14 10:44:34.251451611 -0700
Change: 2018-09-14 11:28:01.035536814 -0700
Birth: -
"stat /home/tornado/.ssh/authorized_keys" output
File: '/home/tornado/.ssh/authorized_keys'
Size: 209 Blocks: 8 IO Block: 4096 regular file
Device: fc00h/64512d Inode: 19267590 Links: 1
Access: (0600/-rw-------) Uid: ( 1000/ tornado) Gid: ( 1000/ tornado)
Access: 2018-09-14 11:33:41.053150514 -0700
Modify: 2018-09-14 10:44:34.127451092 -0700
Change: 2018-09-14 11:28:01.035536814 -0700
Birth: -
Root privileges were granted to the user "tornado" as well, by adding the following via visudo:
# User privilege specification
tornado ALL=(ALL:ALL) ALL
I also attempted: "chown tornado:root /home/tornado"
which made no change to the situation.
Solved
A password was added to the user "tornado", which allows me to sudo to root and restores the needed functionality.
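An alternative to giving the account a password, assuming passwordless sudo is acceptable for this machine, is a NOPASSWD rule in sudoers (edited via visudo):

```
# allow "tornado" to run any command via sudo without a password
tornado ALL=(ALL:ALL) NOPASSWD: ALL
```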

Minishift: Problems creating virtual machine

My question is about the installation of an OpenShift environment using Minishift on VirtualBox.
minishift v1.4.1+0f658ea
VirtualBox-5.1.26-117224-Win.exe
The installation is incomplete due to the following error:
C:\Users\xyzdgs\Desktop\Openshift_n_Docker\OpenShift Developer>minishift.exe start --vm-driver=C:\Program Files\Oracle\VirtualBox\VBoxSVC.exe
-- Starting local OpenShift cluster using 'C:\Program' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 2 GB
vCPUs : 2
Disk size: 20 GB
Downloading ISO 'https://github.com/minishift/minishift-b2d-iso/releases/download/v1.1.0/minishift-b2d.iso'
40.00 MiB / 40.00 MiB [===========================================] 100.00% 0s
-- Starting Minishift VM ... | Unsupported driver: C:\Program
So, to solve this, I simply passed the directory where all the drivers are located and ran it again:
C:\Users\xyzdgs\Desktop\Openshift_n_Docker\OpenShift Developer>minishift.exe start --vm-driver=C:\Program Files\Oracle\VirtualBox\
-- Starting local OpenShift cluster using 'C:\Program' hypervisor ...
-- Starting Minishift VM ... / FAIL E0825 11:20:43.830638 1260 start.go:342]
Error starting the VM: Error getting the state for host: machine does not exist.
Retrying.
| FAIL E0825 11:20:44.297638 1260 start.go:342] Error starting the VM: Error getting the state for host: machine does not exist. Retrying.
/ FAIL E0825 11:20:44.612638 1260 start.go:342] Error starting the VM: Error getting the state for host: . Retrying.
Error starting the VM: Error getting the state for host: machine does not exist
Error getting the state for host: machine does not exist
Error getting the state for host: machine does not exist
It says "machine does not exist" - shouldn't the machine be created by minishift itself? (See the procedure here: blog.novatec-gmbh.de/getting-started-minishift-openshift-origin-one-vm/)
Not sure what is causing this. Please guide.
The main issue with the command, and what it is really complaining about, is that you are passing in an unquoted path:
minishift.exe start --vm-driver=C:\Program Files\Oracle\VirtualBox\VBoxSVC.exe
should have been
minishift.exe start --vm-driver="C:\Program Files\Oracle\VirtualBox\VBoxSVC.exe"
But according to the MiniShift documentation, you should update to VirtualBox 5.1.12+ (which you have) and use the following syntax:
minishift.exe start --vm-driver=virtualbox
7 months after this question was asked and using VirtualBox v4.3.30, I can get MiniShift v1.15.1 running with the last command, but can't get it to accept your previous syntax or even produce the same error from it.

Unable to upload a file into OpenStack Swift 2.14.0 on a Raspberry Pi 3 because of "[Errno 13] Permission denied"

Creating and erasing buckets (containers) in my OpenStack Swift version 2.14.0 installation works well. It is a Swift only installation. No further OpenStack services like Keystone have been deployed.
$ swift stat
Account: AUTH_test
Containers: 2
Objects: 0
Bytes: 0
Containers in policy "policy-0": 2
Objects in policy "policy-0": 0
Bytes in policy "policy-0": 0
Connection: keep-alive
...
$ swift post s3perf
$ swift list -A http://10.0.0.253:8080/auth/v1.0 -U test:tester -K testing
bucket
s3perf
These are the (positive) messages regarding bucket creation inside the file storage1.error.
$ tail -f /var/log/swift/storage1.error
...
May 9 13:58:50 raspberrypi container-server: STDERR: (1114) accepted
('127.0.0.1', 38118)
May 9 13:58:50 raspberrypi container-server: STDERR: 127.0.0.1 - -
[09/May/2017 11:58:50] "POST /d1/122/AUTH_test/s3perf HTTP/1.1" 204 142
0.021630 (txn: tx982eb25d83624b37bd290-005911aefa)
But any attempt to upload a file causes just an error message [Errno 13] Permission denied.
$ swift upload s3perf s3perf-testfile1.txt
Object PUT failed: http://10.0.0.253:8080/v1/AUTH_test/s3perf/s3perf-testfile1.txt
503 Service Unavailable [first 60 chars of response] <html><h1>Service
Unavailable</h1><p>The server is currently
$ tail -f /var/log/swift/storage1.error
...
May 18 20:55:44 raspberrypi object-server: STDERR: (927) accepted
('127.0.0.1', 45684)
May 18 20:55:44 raspberrypi object-server: ERROR __call__ error with PUT
/d1/40/AUTH_test/s3perf/testfile : #012Traceback (most recent call
last):#012 File "/home/pi/swift/swift/obj/server.py", line 1105, in
__call__#012 res = getattr(self, req.method)(req)#012 File
"/home/pi/swift/swift/common/utils.py", line 1626, in _timing_stats#012
resp = func(ctrl, *args, **kwargs)#012 File
"/home/pi/swift/swift/obj/server.py", line 814, in PUT#012
writer.put(metadata)#012 File "/home/pi/swift/swift/obj/diskfile.py",
line 2561, in put#012 super(DiskFileWriter, self)._put(metadata,
True)#012 File "/home/pi/swift/swift/obj/diskfile.py", line 1566, in
_put#012 tpool_reraise(self._finalize_put, metadata, target_path,
cleanup)#012 File "/home/pi/swift/swift/common/utils.py", line 3536, in
tpool_reraise#012 raise resp#012IOError: [Errno 13] Permission denied
(txn: txfbf08bffde6d4657a72a5-00591dee30)
May 18 20:55:44 raspberrypi object-server: STDERR: 127.0.0.1 - -
[18/May/2017 18:55:44] "PUT /d1/40/AUTH_test/s3perf/testfile HTTP/1.1"
500 875 0.015646 (txn: txfbf08bffde6d4657a72a5-00591dee30)
Also the proxy.error file contains an error message ERROR 500 Expect: 100-continue From Object Server.
May 18 20:55:44 raspberrypi proxy-server: ERROR 500 Expect: 100-continue
From Object Server 127.0.0.1:6010/d1 (txn: txfbf08bffde6d4657a72a5-
00591dee30) (client_ip: 10.0.0.220)
I have started Swift as user pi and assigned these folders to this user:
$ sudo chown pi:pi /etc/swift
$ sudo chown -R pi:pi /mnt/sdb1/*
$ sudo chown -R pi:pi /var/cache/swift
$ sudo chown -R pi:pi /var/run/swift
sdb1 is a loopback device with XFS file system.
$ mount | grep sdb1
/srv/swift-disk on /mnt/sdb1 type xfs (rw,noatime,nodiratime,attr2,nobarrier,inode64,logbufs=8,noquota)
$ ls -ld /mnt/sdb1/1/
drwxr-xr-x 3 pi pi 17 May 12 13:14 /mnt/sdb1/1/
I deployed Swift this way.
I wonder why creating buckets (containers) works, but uploading a file fails because of Permission denied.
Update 1:
$ sudo swift-ring-builder /etc/swift/account.builder
/etc/swift/account.builder, build version 2
256 partitions, 1.000000 replicas, 1 regions, 1 zones, 1 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 0 (0:00:00 remaining)
The overload factor is 0.00% (0.000000)
Ring file /etc/swift/account.ring.gz is up-to-date
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
0 1 1 127.0.0.1:6012 127.0.0.1:6012 d1 1.00 256 0.00
$ sudo swift-ring-builder /etc/swift/container.builder
/etc/swift/container.builder, build version 2
256 partitions, 1.000000 replicas, 1 regions, 1 zones, 1 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 0 (0:00:00 remaining)
The overload factor is 0.00% (0.000000)
Ring file /etc/swift/container.ring.gz is up-to-date
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
0 1 1 127.0.0.1:6011 127.0.0.1:6011 d1 1.00 256 0.00
$ sudo swift-ring-builder /etc/swift/object.builder
/etc/swift/object.builder, build version 2
256 partitions, 1.000000 replicas, 1 regions, 1 zones, 1 devices, 0.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 0 (0:00:00 remaining)
The overload factor is 0.00% (0.000000)
Ring file /etc/swift/object.ring.gz is up-to-date
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
0 1 1 127.0.0.1:6010 127.0.0.1:6010 d1 1.00 256 0.00
Update 2
The required ports are open.
$ nmap localhost -p 6010,6011,6012,8080,22
...
PORT STATE SERVICE
22/tcp open ssh
6010/tcp open x11
6011/tcp open unknown
6012/tcp open unknown
8080/tcp open http-proxy
Update 3
I can write as user pi inside the folder where Swift should store the objects.
$ whoami
pi
$ touch /srv/1/node/d1/objects/test
$ ls -l /srv/1/node/d1/objects/test
-rw-r--r-- 1 pi pi 0 May 13 22:59 /srv/1/node/d1/objects/test
Update 4
All swift processes belong to user pi.
$ ps aux | grep swift
pi 944 3.2 2.0 24644 20100 ? Ss May12 65:14 /usr/bin/python /usr/local/bin/swift-proxy-server /etc/swift/proxy-server.conf
pi 945 3.1 2.0 25372 20228 ? Ss May12 64:30 /usr/bin/python /usr/local/bin/swift-container-server /etc/swift/container-server.conf
pi 946 3.1 1.9 24512 19416 ? Ss May12 64:03 /usr/bin/python /usr/local/bin/swift-account-server /etc/swift/account-server.conf
pi 947 3.1 1.9 23688 19320 ? Ss May12 64:04 /usr/bin/python /usr/local/bin/swift-object-server /etc/swift/object-server.conf
pi 1000 0.0 1.7 195656 17844 ? Sl May12 0:01 /usr/bin/python /usr/local/bin/swift-object-server /etc/swift/object-server.conf
pi 1001 0.0 1.8 195656 18056 ? Sl May12 0:01 /usr/bin/python /usr/local/bin/swift-object-server /etc/swift/object-server.conf
pi 1002 0.0 1.6 23880 16772 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-object-server /etc/swift/object-server.conf
pi 1003 0.0 1.7 195656 17848 ? Sl May12 0:01 /usr/bin/python /usr/local/bin/swift-object-server /etc/swift/object-server.conf
pi 1004 0.0 1.7 24924 17504 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-account-server /etc/swift/account-server.conf
pi 1005 0.0 1.6 24924 16912 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-account-server /etc/swift/account-server.conf
pi 1006 0.0 1.8 24924 18368 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-account-server /etc/swift/account-server.conf
pi 1007 0.0 1.8 24924 18208 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-account-server /etc/swift/account-server.conf
pi 1008 0.0 1.8 25864 18824 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-container-server /etc/swift/container-server.conf
pi 1009 0.0 1.8 25864 18652 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-container-server /etc/swift/container-server.conf
pi 1010 0.0 1.7 25864 17340 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-container-server /etc/swift/container-server.conf
pi 1011 0.0 1.8 25864 18772 ? S May12 0:01 /usr/bin/python /usr/local/bin/swift-container-server /etc/swift/container-server.conf
pi 1012 0.0 1.8 24644 18276 ? S May12 0:03 /usr/bin/python /usr/local/bin/swift-proxy-server /etc/swift/proxy-server.conf
pi 1013 0.0 1.8 24900 18588 ? S May12 0:03 /usr/bin/python /usr/local/bin/swift-proxy-server /etc/swift/proxy-server.conf
pi 1014 0.0 1.8 24900 18588 ? S May12 0:03 /usr/bin/python /usr/local/bin/swift-proxy-server /etc/swift/proxy-server.conf
pi 1015 0.0 1.8 24900 18568 ? S May12 0:03 /usr/bin/python /usr/local/bin/swift-proxy-server /etc/swift/proxy-server.conf
Update 5
When I create a bucket, the Swift service creates a folder like this one:
/mnt/sdb1/1/node/d1/containers/122/9d5/7a23d9409f11da3062432c6faa75f9d5/
and this folder contains a db-file like 7a23d9409f11da3062432c6faa75f9d5.db. I think this is the correct behavior.
But when I try to upload a file inside a bucket, Swift creates just an empty folder like this one.
/mnt/sdb1/1/node/d1/objects/139/eca/8b17958f984943fc97b6b937061d2eca
I can create files inside these empty folders via touch or echo as user pi but for an unknown reason, Swift does not store files inside these folders.
Update 6
In order to investigate this issue further, I installed Swift according to the SAIO - Swift All In One instructions, once inside a VMware ESXi virtual machine with Ubuntu 14.04 LTS and once inside Raspbian on a Raspberry Pi 3. The result is that inside the Ubuntu 14.04 VM, Swift works perfectly, but when running on top of the Raspberry Pi, uploading files does not work.
Object PUT failed: http://10.0.0.253:8080/v1/AUTH_test/s3perf-testbucket/testfiles/s3perf-testfile1.txt
503 Service Unavailable [first 60 chars of response]
<html><h1>Service Unavailable</h1><p>The server is currently
The storage1.error log file still says:
May 24 13:15:15 raspberrypi object-server: ERROR __call__ error with PUT
/sdb1/484/AUTH_test/s3perf-testbucket/testfiles/s3perf-testfile1.txt :
#012Traceback (most recent call last):#012 File
"/home/pi/swift/swift/obj/server.py", line 1105, in __call__#012 res =
getattr(self, req.method)(req)#012 File
"/home/pi/swift/swift/common/utils.py", line 1626, in _timing_stats#012
resp = func(ctrl, *args, **kwargs)#012 File
"/home/pi/swift/swift/obj/server.py", line 814, in PUT#012
writer.put(metadata)#012 File "/home/pi/swift/swift/obj/diskfile.py",
line 2561, in put#012 super(DiskFileWriter, self)._put(metadata,
True)#012 File "/home/pi/swift/swift/obj/diskfile.py", line 1566, in
_put#012 tpool_reraise(self._finalize_put, metadata, target_path,
cleanup)#012 File "/home/pi/swift/swift/common/utils.py", line 3536, in
tpool_reraise#012 raise resp#012IOError: [Errno 13] Permission denied
(txn: txdfe3c7f704be4af8817b3-0059256b43)
Update 7
The issue is still not fixed, but I have now a working Swift service on the Raspberry Pi. I installed the (quite outdated) Swift revision 2.2.0, which is shipped with Raspbian and it works well. The steps I did are explained here.
Based on the information that you provided, the errors occur while writing metadata.
The operations for writing metadata fall into two categories: manipulating the inode and manipulating extended attributes, so there are two possible sources for your errors.
First, it may be an inode-related error. This error may occur due to setting the inode64 parameter while mounting the device. According to the XFS man page:
If applications are in use which do not handle inode numbers bigger than 32 bits, the inode32 option should be specified.
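If the inode64 option turns out to be the cause, the loopback device from the question's mount output could be mounted with inode32 instead; a hedged sketch (on older kernels a full unmount and mount may be needed rather than a remount):

```
# remount the XFS loopback device without 64-bit inode numbers
mount -o remount,inode32 /mnt/sdb1
```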
Second, it may be an extended-attributes-related error. You can use the xattr package for Python to write extended attributes and check whether an exception occurs.
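A minimal sketch of such a check, using the standard library's os.setxattr/os.getxattr (Linux-only; the third-party xattr package wraps the same calls). It tries to set and read back a user xattr on a temporary file and returns False on any OSError, so you can point it at the Swift object directory to test it:

```python
import os
import tempfile

def xattrs_supported(directory):
    """Return True if user extended attributes work on `directory`'s filesystem."""
    if not hasattr(os, "setxattr"):  # os.*xattr only exists on Linux
        return False
    try:
        with tempfile.NamedTemporaryFile(dir=directory) as f:
            os.setxattr(f.name, "user.swift.test", b"ok")
            return os.getxattr(f.name, "user.swift.test") == b"ok"
    except OSError:  # e.g. EPERM / ENOTSUP on the failing filesystem
        return False

# point this at the object store path, e.g. "/mnt/sdb1/1/node/d1/objects"
print(xattrs_supported("."))
```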