Error uploading a file with TUSD to Cloudflare R2 - cloudflare

I am trying to use tus-js-client with the TUSD Docker image to upload a file to Cloudflare R2.
I've used the same docker-compose.yml to upload to an AWS S3 bucket successfully, as shown below.
TUSD provides a way to upload a file to an S3-compatible endpoint via the -s3-endpoint parameter, and Cloudflare R2 provides such an endpoint along with the needed credentials (Access Key ID and Secret Access Key).
When the file is sent by the frontend app, the first request works fine, creating the metadata .info file in the R2 bucket.
The problem begins with the subsequent requests that upload the file chunks.
Looking at the request in the browser console, there is no response with a status code. The error below is reported by tus-js-client. The next retry is a HEAD request, which returns a 404 status code.
At first I tried running an nginx reverse proxy in front of tusd with -behind-proxy=true, and after the error I tried -behind-proxy=false. Nothing changed.
I then removed nginx and tried again with only the tusd server. Still nothing changed.
tus-js-client error
Error: tus: failed to upload chunk at offset 0, caused by [object ProgressEvent], originated from request (method: PATCH, url: http://12.23.45.67/files/3087adfc92d0d045b5b28c73ee32289a+ALtBKUtgElTHNGK2gGlbb4K9ne0uZmmAQdXz7mFZrvAAZkACIDmLY+0L+DvFlSwksYLGtil11ve/LH5UqOKoxvywNSKzLZHX8IwEkplpOk935LDJQTVBANm5PILybEhgcCwPrH3039iCaoa7QOKM4XfOFroZ0p7RO7FPaKrgys97n+h5vXBD87KiFcl2RDUnUdS2UkA1I1YXrCeSd87Yi3nzK2wBWjBPm3PX94ltujtvtVXOPU3pIO7sEaPZQBDGxxLdBStxVFnMNYMfxuiBKiF/9RkEw7kcWdGGApfmG5FvYNfBfM3sF/Wl/BYHw0HwfA==, response code: n/a, response text: n/a, request id: n/a)
TUSD log
tus_1 | [tusd] 2022/10/29 01:56:01.822078 Using 0.00MB as maximum size.
tus_1 | [tusd] 2022/10/29 01:56:01.822475 Using 0.0.0.0:1080 as address to listen.
tus_1 | [tusd] 2022/10/29 01:56:01.822804 Using /files/ as the base path.
tus_1 | [tusd] 2022/10/29 01:56:01.823180 Using /metrics as the metrics path.
tus_1 | [tusd] 2022/10/29 01:56:01.823545 Supported tus extensions: creation,creation-with-upload,termination,concatenation,creation-defer-length
tus_1 | [tusd] 2022/10/29 01:56:01.824120 You can now upload files to: http://0.0.0.0:1080/files/
tus_1 | [tusd] 2022/10/29 01:56:15 event="RequestIncoming" method="POST" path="" requestId=""
tus_1 | [tusd] 2022/10/29 01:56:16 event="UploadCreated" id="54a4a47b064f1d1033c6a2ab31b5f08d+AO8ImmEviOzNx4pKspyfdIAO/LhLOo89yfvbMmgpyyrewhAmEGeO0sNici3idH/wBqomvVuItdUUWecHMKAdf4U0P+rqDSajq6LRE8pxmbS/aB++tChiN93FEtpMIZUXIyQOFq9L/cspKb1ocUzzXBFdR1n+EAjLwi8mJ3Sqw6Dj9CRxff9jQ7WkRgHOGUeYebLXTzNZnNvv86IWDCekPCPj1BoztjPM2nS7+1HYABHShcOfioQ6C42rYUkfLWV4eU4yRClMMemvql+FjgdtTOrQrjASEj8SRbjr3Rvhs3iix3h7peqs5p2gUvGvhrbw/g==" size="1053651" url="http://52.87.247.28/files/54a4a47b064f1d1033c6a2ab31b5f08d+AO8ImmEviOzNx4pKspyfdIAO/LhLOo89yfvbMmgpyyrewhAmEGeO0sNici3idH/wBqomvVuItdUUWecHMKAdf4U0P+rqDSajq6LRE8pxmbS/aB++tChiN93FEtpMIZUXIyQOFq9L/cspKb1ocUzzXBFdR1n+EAjLwi8mJ3Sqw6Dj9CRxff9jQ7WkRgHOGUeYebLXTzNZnNvv86IWDCekPCPj1BoztjPM2nS7+1HYABHShcOfioQ6C42rYUkfLWV4eU4yRClMMemvql+FjgdtTOrQrjASEj8SRbjr3Rvhs3iix3h7peqs5p2gUvGvhrbw/g=="
tus_1 | [tusd] 2022/10/29 01:56:16 event="ResponseOutgoing" status="201" method="POST" path="" requestId=""
tus_1 | [tusd] 2022/10/29 01:56:16 event="RequestIncoming" method="OPTIONS" path="54a4a47b064f1d1033c6a2ab31b5f08d+AO8ImmEviOzNx4pKspyfdIAO/LhLOo89yfvbMmgpyyrewhAmEGeO0sNici3idH/wBqomvVuItdUUWecHMKAdf4U0P+rqDSajq6LRE8pxmbS/aB++tChiN93FEtpMIZUXIyQOFq9L/cspKb1ocUzzXBFdR1n+EAjLwi8mJ3Sqw6Dj9CRxff9jQ7WkRgHOGUeYebLXTzNZnNvv86IWDCekPCPj1BoztjPM2nS7+1HYABHShcOfioQ6C42rYUkfLWV4eU4yRClMMemvql+FjgdtTOrQrjASEj8SRbjr3Rvhs3iix3h7peqs5p2gUvGvhrbw/g==" requestId=""
tus_1 | [tusd] 2022/10/29 01:56:16 event="ResponseOutgoing" status="200" method="OPTIONS" path="54a4a47b064f1d1033c6a2ab31b5f08d+AO8ImmEviOzNx4pKspyfdIAO/LhLOo89yfvbMmgpyyrewhAmEGeO0sNici3idH/wBqomvVuItdUUWecHMKAdf4U0P+rqDSajq6LRE8pxmbS/aB++tChiN93FEtpMIZUXIyQOFq9L/cspKb1ocUzzXBFdR1n+EAjLwi8mJ3Sqw6Dj9CRxff9jQ7WkRgHOGUeYebLXTzNZnNvv86IWDCekPCPj1BoztjPM2nS7+1HYABHShcOfioQ6C42rYUkfLWV4eU4yRClMMemvql+FjgdtTOrQrjASEj8SRbjr3Rvhs3iix3h7peqs5p2gUvGvhrbw/g==" requestId=""
tus_1 | [tusd] 2022/10/29 01:56:16 event="RequestIncoming" method="PATCH" path="54a4a47b064f1d1033c6a2ab31b5f08d+AO8ImmEviOzNx4pKspyfdIAO/LhLOo89yfvbMmgpyyrewhAmEGeO0sNici3idH/wBqomvVuItdUUWecHMKAdf4U0P+rqDSajq6LRE8pxmbS/aB++tChiN93FEtpMIZUXIyQOFq9L/cspKb1ocUzzXBFdR1n+EAjLwi8mJ3Sqw6Dj9CRxff9jQ7WkRgHOGUeYebLXTzNZnNvv86IWDCekPCPj1BoztjPM2nS7+1HYABHShcOfioQ6C42rYUkfLWV4eU4yRClMMemvql+FjgdtTOrQrjASEj8SRbjr3Rvhs3iix3h7peqs5p2gUvGvhrbw/g==" requestId=""
tus_1 | [tusd] 2022/10/29 01:56:17 event="RequestIncoming" method="PATCH" path="54a4a47b064f1d1033c6a2ab31b5f08d+AO8ImmEviOzNx4pKspyfdIAO/LhLOo89yfvbMmgpyyrewhAmEGeO0sNici3idH/wBqomvVuItdUUWecHMKAdf4U0P+rqDSajq6LRE8pxmbS/aB++tChiN93FEtpMIZUXIyQOFq9L/cspKb1ocUzzXBFdR1n+EAjLwi8mJ3Sqw6Dj9CRxff9jQ7WkRgHOGUeYebLXTzNZnNvv86IWDCekPCPj1BoztjPM2nS7+1HYABHShcOfioQ6C42rYUkfLWV4eU4yRClMMemvql+FjgdtTOrQrjASEj8SRbjr3Rvhs3iix3h7peqs5p2gUvGvhrbw/g==" requestId=""
tus_1 | [tusd] 2022/10/29 01:56:17 event="RequestIncoming" method="HEAD" path="54a4a47b064f1d1033c6a2ab31b5f08d+AO8ImmEviOzNx4pKspyfdIAO/LhLOo89yfvbMmgpyyrewhAmEGeO0sNici3idH/wBqomvVuItdUUWecHMKAdf4U0P+rqDSajq6LRE8pxmbS/aB++tChiN93FEtpMIZUXIyQOFq9L/cspKb1ocUzzXBFdR1n+EAjLwi8mJ3Sqw6Dj9CRxff9jQ7WkRgHOGUeYebLXTzNZnNvv86IWDCekPCPj1BoztjPM2nS7+1HYABHShcOfioQ6C42rYUkfLWV4eU4yRClMMemvql+FjgdtTOrQrjASEj8SRbjr3Rvhs3iix3h7peqs5p2gUvGvhrbw/g==" requestId=""
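For context, the request sequence in the logs (POST to create the upload, PATCH to append a chunk at an offset, HEAD to query the current offset on retry) follows the tus protocol. A minimal in-memory sketch of that flow (a hypothetical model for illustration, not tusd's actual code):

```python
# Hypothetical model of the tus request flow seen in the logs above.
class TusServer:
    def __init__(self):
        self.uploads = {}  # upload id -> {"size": int, "data": bytearray}

    def create(self, upload_id, size):
        # POST: create the upload and its metadata (the .info file in tusd's S3 store).
        self.uploads[upload_id] = {"size": size, "data": bytearray()}
        return 201

    def patch(self, upload_id, offset, chunk):
        # PATCH: append a chunk, but only at the server's current offset.
        up = self.uploads.get(upload_id)
        if up is None:
            return 404  # upload state is unknown to the server
        if offset != len(up["data"]):
            return 409  # offset mismatch
        up["data"].extend(chunk)
        return 204

    def head(self, upload_id):
        # HEAD: report the current offset so the client can resume.
        up = self.uploads.get(upload_id)
        return (404, None) if up is None else (200, len(up["data"]))

s = TusServer()
print(s.create("u1", 4))        # 201
print(s.patch("u1", 0, b"ab"))  # 204
print(s.head("u1"))             # (200, 2)
print(s.head("missing"))        # (404, None)
```

In this model, a retry HEAD can only return 404 when the server no longer knows the upload, which is consistent with the upload state being lost somewhere between tusd and the R2 backend after the failed PATCH.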
docker-compose.yml
version: '2'
services:
  tus:
    image: tusproject/tusd
    command: -s3-bucket=my-bucket -behind-proxy=false -s3-endpoint=https://abcdef.r2.cloudflarestorage.com
    environment:
      - AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXXX
      - AWS_SECRET_ACCESS_KEY=YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
      - AWS_REGION=us-east-1
    ports:
      - "80:1080"
    restart: always
    networks:
      - code-network
networks:
  code-network:
    driver: bridge

Related

Setting up S3 compatible service for blob storage on Google Cloud Storage

PS: cross-posted on the Drone forums here.
I'm trying to set up an S3-like service for Drone logs. I've verified that my AWS_* values are set correctly in the container, and that running aws-cli from inside the container gives correct output for:
aws s3api list-objects --bucket drone-logs --endpoint-url=https://storage.googleapis.com
However, the Drone server itself is unable to upload logs to the bucket, with the following error:
{"error":"InvalidArgument: Invalid argument.\n\tstatus code: 400, request id: , host id: ","level":"warning","msg":"manager: cannot upload complete logs","step-id":7,"time":"2023-02-09T12:26:16Z"}
The Drone server on startup shows that the S3-related configuration was picked up correctly:
rpc:
  server: ""
  secret: my-secret
  debug: false
  host: drone.XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  proto: https
s3:
  bucket: drone-logs
  prefix: ""
  endpoint: https://storage.googleapis.com
  pathstyle: true
The environment variables inside the Drone server container are:
# env | grep -E 'DRONE|AWS' | sort
AWS_ACCESS_KEY_ID=GOOGXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
AWS_DEFAULT_REGION=us-east-1
AWS_REGION=us-east-1
AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_COOKIE_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_DATABASE_DATASOURCE=postgres://drone:XXXXXXXXXXXXXXXXXXXXXXXXXXXXX#35.XXXXXX.XXXX:5432/drone?sslmode=disable
DRONE_DATABASE_DRIVER=postgres
DRONE_DATABASE_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_GITHUB_CLIENT_ID=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_GITHUB_CLIENT_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_JSONNET_ENABLED=true
DRONE_LOGS_DEBUG=true
DRONE_LOGS_TRACE=true
DRONE_RPC_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_S3_BUCKET=drone-logs
DRONE_S3_ENDPOINT=https://storage.googleapis.com
DRONE_S3_PATH_STYLE=true
DRONE_SERVER_HOST=drone.XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
DRONE_SERVER_PROTO=https
DRONE_STARLARK_ENABLED=true
The .drone.yaml that is being used is available here, on GitHub.
The server is built using the nolimit tag:
go build -tags "nolimit" github.com/drone/drone/cmd/drone-server
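One thing worth double-checking alongside pathstyle: true is the addressing mode: with path-style requests the bucket name goes into the URL path, while virtual-hosted style puts it into the hostname. A small illustrative sketch of the two URL forms (object_url is a hypothetical helper, not Drone's code):

```python
def object_url(endpoint_host: str, bucket: str, key: str, path_style: bool = True) -> str:
    """Build an object URL in one of the two S3 addressing styles."""
    if path_style:
        # Path-style: the bucket is the first path segment.
        return f"https://{endpoint_host}/{bucket}/{key}"
    # Virtual-hosted style: the bucket is a hostname prefix.
    return f"https://{bucket}.{endpoint_host}/{key}"

print(object_url("storage.googleapis.com", "drone-logs", "logs/7"))
# https://storage.googleapis.com/drone-logs/logs/7
print(object_url("storage.googleapis.com", "drone-logs", "logs/7", path_style=False))
# https://drone-logs.storage.googleapis.com/logs/7
```

If the client and the endpoint disagree on the style, requests can fail with 400-class errors like the InvalidArgument above, so it is a cheap thing to rule out.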

HTTP ERROR 401 Unauthorized when access ActiveMQ admin console

I installed ActiveMQ version 5.17.0 and started it from the command prompt on Windows. It started, but when I access the admin console it shows a 401 error message without a popup to enter the username/password.
This is the log when starting:
INFO | Connector stomp started
INFO | Listening for connections at: mqtt://VN-PF2MF5K6:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector mqtt started
INFO | Starting Jetty server
INFO | Creating Jetty connector
WARN | ServletContext#o.e.j.s.ServletContextHandler#40e60ece{/,null,STARTING} has uncovered http methods for path: /
INFO | Listening for connections at ws://VN-PF2MF5K6:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600
INFO | Connector ws started
INFO | Apache ActiveMQ 5.17.0 (localhost, ID:VN-PF2MF5K6-51292-1660105581127-0:1) started
INFO | For help or more information please see: http://activemq.apache.org
INFO | ActiveMQ WebConsole available at http://127.0.0.1:8161/
INFO | ActiveMQ Jolokia REST API available at http://127.0.0.1:8161/api/jolokia/
And when I access the admin console at localhost:8161/admin, I get:
HTTP ERROR 401 Unauthorized
URI: /admin
STATUS: 401
MESSAGE: Unauthorized
SERVLET: -
How can I resolve it?

Express-Gateway, serve same API path/route, but under different ServiceEndpoints

I have a server in Node.js + Express that exposes some APIs, both to the public and to admins. I currently have two cloned instances running: one for Test and one for Production. They are twins exposing the same routes (/admin, /public), but they are connected to two different databases and deployed at two different addresses.
I want to use Express Gateway to provide the APIs to third parties, so I'll first give them access to the Test server. Once everything is working, I'll also give them Production access.
To do this, my idea is to create just one eg user with multiple eg applications. Each eg application will have eg credentials to access either the Test or the Production server.
http://server_test.com
|-------------| |-------------|
| App Prod | | Server Test |
+----► | scopes: |------+ +-----► | /public |
| | [api_prod] | | | | /admin |
| |-------------| ▼ | |-------------|
| http://gateway.com
|------| |------------|
| User | | Express |
|------| | Gateway |
| |-------------| |------------|
| | App Test | ▲ | http://server_prod.com
+----► | scopes: | | | |-------------|
| [api_test] |------+ +-----► | Server Prod |
|-------------| | /public |
| /admin |
|-------------|
According to the credentials provided, the gateway should redirect requests to server_test.com or server_prod.com. My idea was to use eg scopes to route requests to the right endpoint: the Server Test policy would require the api_test scope, while Server Prod would require the api_prod scope.
However, this solution doesn't work, because if the first match in apiEndpoints fails, the request simply results in "Not Found".
Example: I make a request to http://gateway.com/public using App Prod credentials, with the api_prod scope. It should be routed to http://server_prod.com/public, but instead it first matches paths: '/*' of testEndpoint and then fails the scopes condition. So it just fails, while the correct apiEndpoint should have been prodEndpoint.
How can I solve this problem?
This is my gateway.config.yml
apiEndpoints:
  testEndpoint:
    host: '*'
    paths: '/*'          # <--- matches this
    scopes: ["api_test"] # <--- but fails this
  prodEndpoint:
    host: '*'
    paths: '/*'
    scopes: ["api_prod"] # <---- this is the right one
serviceEndpoints:
  testService:
    url: "http://server_test.com"
  prodService:
    url: "http://server_prod.com"
policies:
  - proxy
pipelines:
  testEndpoint: # Test
    apiEndpoints:
      - testEndpoint
    policies:
      - proxy:
          - action:
              serviceEndpoint: testService
  prodEndpoint: # Prod
    apiEndpoints:
      - prodEndpoint
    policies:
      - proxy:
          - action:
              serviceEndpoint: prodService
I solved it this way, using the rewrite policy:
I prefix my clients' requests with /test or /prod.
The prefix is used to match the correct apiEndpoint by path.
The rewrite policy then deletes the prefix from the request.
Finally, the serviceEndpoint is chosen and the request goes on...
http://server_test.com
|-------------| |-------------|
| App Prod | /prod/admin /admin | Server Test |
| scopes: |-------------+ +--------► | /public |
| [api_prod] | | | | /admin |
|-------------| ▼ | |-------------|
http://gateway.com
|------------|
| Express |
| Gateway |
|-------------| |------------|
| App Test | ▲ | http://server_prod.com
| scopes: | | | |-------------|
| [api_test] |-------------+ +---------► | Server Prod |
|-------------| /test/admin /admin | /public |
| /admin |
|-------------|
This is my config file:
apiEndpoints:
  testEndpoint:
    host: '*'
    paths: '/test/*'
    scopes: ["api_test"]
  prodEndpoint:
    host: '*'
    paths: '/prod/*'
    scopes: ["api_prod"]
serviceEndpoints:
  testService:
    url: "http://server_test.com"
  prodService:
    url: "http://server_prod.com"
policies:
  - proxy
pipelines:
  testEndpoint: # Test
    apiEndpoints:
      - testEndpoint
    policies:
      - rewrite: # rewrite - delete '/test'
          - condition:
              name: regexpmatch
              match: ^/test/?(.*)$
            action:
              rewrite: /$1
      - proxy:
          - action:
              serviceEndpoint: testService
  prodEndpoint: # Prod
    apiEndpoints:
      - prodEndpoint
    policies:
      - rewrite: # rewrite - delete '/prod'
          - condition:
              name: regexpmatch
              match: ^/prod/?(.*)$
            action:
              rewrite: /$1
      - proxy:
          - action:
              serviceEndpoint: prodService
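The regexpmatch/rewrite pair above can be sanity-checked outside the gateway. A quick Python sketch of the same substitution (rewrite_path is a hypothetical helper, not part of Express Gateway):

```python
import re

def rewrite_path(path: str, prefix: str) -> str:
    # Mirrors the gateway rule: match ^/<prefix>/?(.*)$ and rewrite to /$1.
    return re.sub(rf"^/{prefix}/?(.*)$", r"/\1", path)

print(rewrite_path("/test/admin", "test"))   # -> /admin
print(rewrite_path("/prod/public", "prod"))  # -> /public
```

Note that the optional /? in the pattern also maps a bare /test to /, so the prefix root still resolves to something valid on the backend.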

Connecting to Kerberized solr on cloudera from karaf

I'm trying to connect to Solr (non-cloud), which has Kerberos enabled, from my SolrJ application running in a Karaf container.
With Kerberos disabled, I'm able to connect fine.
With Kerberos enabled, I'm able to connect outside of Karaf by running a simple SolrClient class.
But it's not working from within Karaf.
Code:
System.setProperty("java.security.auth.login.config", "<path to jaas.conf file>");
String urlString = "http://<IP>:8983/solr/test";
SolrServer server = new HttpSolrServer(urlString);
SolrQuery squery = new SolrQuery("*:*"); // squery was not defined in the original snippet
QueryResponse sresponse = server.query(squery);
Exception in Karaf on trying to query:
2016-12-15 15:02:17,969 | WARN | l Console Thread | RequestTargetAuthentication | ? ? | 271 - wrap_mvn_org.apache.httpcomponents_httpclient_4.3.2 - 0.0.0 | NEGOTIATE authentication error: No valid credentials provided (Mechanism level: No valid credentials provided (Mechanism level: Invalid option setting in ticket request. (101)))
2016-12-15 15:03:10,731 | ERROR | l Console Thread | Error:org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected mime type application/octet-stream but got text/html. Apache Tomcat/6.0.44 - Error report HTTP Status 401 - Authentication required type Status report message Authentication required description This request requires HTTP authentication. Apache Tomcat/6.0.44

Removing an unnecessary login module in Apache Karaf

This question was originally posted on the Karaf users mailing list, but I didn't get an answer:
http://karaf.922171.n3.nabble.com/Deleting-an-unnecessary-login-module-td4033321.html
I would like to remove a login module (PublicKeyLoginModule) from the default jaas karaf realm.
According to the docs:
http://karaf.apache.org/manual/latest/developers-guide/security-framework.html
“So if you want to override the default security configuration in Karaf (which is used by the ssh shell, web console and
JMX layer), you need to deploy a JAAS configuration with the name name="karaf" and rank="1".”
However, when I do this, new modules are added rather than replacing the existing ones.
When the blueprint below is loaded, either via the deploy dir or via inclusion in a bundle (created with Maven, with the blueprint at the following path),
src\main\resources\OSGI-INF\blueprint\context.xml
I get the following:
karaf#root()> jaas:realm-list
Index | Realm Name | Login Module Class Name
-----------------------------------------------------------------------------------
1 | karaf | org.apache.karaf.jaas.modules.properties.PropertiesLoginModule
2 | karaf | org.apache.karaf.jaas.modules.publickey.PublickeyLoginModule
3 | karaf | org.apache.karaf.jaas.modules.ldap.LDAPLoginModule
What I would like to see is either
karaf#root()> jaas:realm-list
Index | Realm Name | Login Module Class Name
-----------------------------------------------------------------------------------
1 | karaf | org.apache.karaf.jaas.modules.ldap.LDAPLoginModule
Or, if there were a way to explicitly delete a module:
karaf#root()> jaas:realm-list
Index | Realm Name | Login Module Class Name
-----------------------------------------------------------------------------------
1 | karaf | org.apache.karaf.jaas.modules.properties.PropertiesLoginModule
2 | karaf | org.apache.karaf.jaas.modules.ldap.LDAPLoginModule
This is the blueprint:
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.0.0"
xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0"
xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0">
<type-converters>
<bean class="org.apache.karaf.jaas.modules.properties.PropertiesConverter"/>
</type-converters>
<!-- Allow usage of System properties, especially the karaf.base property -->
<ext:property-placeholder placeholder-prefix="$[" placeholder-suffix="]"/>
<!-- AdminConfig property place holder for the org.apache.karaf.jaas -->
<cm:property-placeholder persistent-id="org.apache.karaf.jaas" update-strategy="none">
<cm:default-properties>
<cm:property name="example.group" value="example-group-value"/>
</cm:default-properties>
</cm:property-placeholder>
<jaas:config name="karaf" rank="1">
<jaas:module className="org.apache.karaf.jaas.modules.ldap.LDAPLoginModule" flags="required">
connection.url = ldap://ldap.example.com:389
user.base.dn = o=example.com
user.filter = (uid=%u)
user.search.subtree = true
role.base.dn = ou=applications,l=global,o=example.com
role.filter = (&(objectClass=groupOfUniqueNames)(uniqueMember=*uid=%u*)(cn=${example.group}))
role.name.attribute = cn
role.search.subtree = true
authentication = simple
</jaas:module>
</jaas:config>
</blueprint>
karaf#root()> shell:info
Karaf
Karaf version 3.0.0
Karaf home ***
Karaf base ***
OSGi Framework org.apache.felix.framework - 4.2.1
Same issue on Karaf 3.0.1
I'd welcome any suggestions. Creating a whole new realm is a possibility, but for policy reasons I'd prefer not to have the PublicKeyLoginModule visible in the runtime at all.
As a workaround, you can try this:
The default karaf realm is registered by the org.apache.karaf.jaas.module bundle via blueprint.
Find the original JaasRealm service named karaf in the service registry and unregister it; then register your own realm using the above blueprint.
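The behaviour the question runs into follows from how Karaf resolves competing realms: when several JaasRealm services share a name, authentication uses the one with the highest rank, but jaas:realm-list still shows every registered module. A tiny Python model of that selection rule (hypothetical sketch, not Karaf's code):

```python
def effective_realm(name, realms):
    """Pick the realm a login would use: same name, highest rank wins."""
    candidates = [r for r in realms if r["name"] == name]
    return max(candidates, key=lambda r: r["rank"])

# The two registrations from the question: the default realm (rank 0)
# and the blueprint-deployed override (rank 1).
realms = [
    {"name": "karaf", "rank": 0,
     "modules": ["PropertiesLoginModule", "PublickeyLoginModule"]},
    {"name": "karaf", "rank": 1,
     "modules": ["LDAPLoginModule"]},
]

print(effective_realm("karaf", realms)["modules"])  # -> ['LDAPLoginModule']
```

So the rank=1 override does take effect for logins; unregistering the original service, as suggested above, is what removes the unwanted modules from view entirely.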