ETL pull failing, error message giving mixed messages - GoodData

When following the instructions on http://developer.gooddata.com/article/loading-data-via-api, I always get an HTTP 400 error:
400: Neither expected file "upload_info.json" nor archive "upload.zip" found (is accessible) in ""
When I HTTP GET the same path that I did for the HTTP PUT, the file downloads just fine.
Any pointers to what I'm probably doing wrong?

GoodData is going through a migration from AWS to Rackspace.
Try changing the host in all GET/POST/PUT requests:
secure.gooddata.com to na1.secure.gooddata.com
secure-di.gooddata.com to na1-di.gooddata.com

You can check which datacenter a project is located in via the /gdc/projects/{projectId} resource, in the "project.content.cluster" field.
For example:
https://secure.gooddata.com/gdc/projects/myProjectId:
{
  "project" : {
    "content" : {
      "cluster" : "na1",
      ....
For AWS this field has an empty value; "na1" means Rackspace.
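The host selection can be sketched as a small helper (a sketch only; the function name is mine and not part of any GoodData SDK, and the hostnames are the ones given above):

```python
# Map the "project.content.cluster" field to the matching API and
# data-integration hostnames, per the mapping described above.

def gooddata_hosts(cluster):
    """Return (api_host, di_host) for a project's cluster value."""
    if cluster == "na1":  # Rackspace datacenter
        return "na1.secure.gooddata.com", "na1-di.gooddata.com"
    if cluster == "":     # empty value means AWS
        return "secure.gooddata.com", "secure-di.gooddata.com"
    raise ValueError("unknown cluster: %r" % cluster)
```

In practice you would first GET /gdc/projects/{projectId}, read `project["project"]["content"]["cluster"]` from the JSON, and pass that value in.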

Related

The error "EOF" occurs on MinIO console login

I am trying to set up secure access to a stand-alone MinIO server running in a Docker container. I copied the private.key and public.crt files to /root/.minio/certs. The ports are mapped like this: 9000:443 and 9001:9001.
When I access an image uploaded to MinIO over HTTPS, it works fine. But when I try to log in to the MinIO web console, I get the terse error message "EOF".
It is the message returned by the API https://{$my_domain}:9001/api/v1/login; the full response is as follows:
{
  "code": 500,
  "detailedMessage": "EOF",
  "message": "invalid Login"
}
Any idea to solve this error?
I had the same error. The reason was that I had forgotten to change
MINIO_SERVER_URL=http... to MINIO_SERVER_URL=https...
I hope it helps.
I found the cause of the problem myself. I forgot to change the server URL in the env_file, so it was still localhost:
MINIO_SERVER_URL=http://localhost:9000
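For reference, the relevant part of the env_file after the fix might look like this (a sketch only; {my_domain} is a placeholder for your own domain, and the rest of the MinIO setup is unchanged):

```
# env_file for the MinIO container (sketch; {my_domain} is a placeholder).
# The scheme and host must match how clients actually reach the server:
# https with the public domain, not http://localhost.
MINIO_SERVER_URL=https://{my_domain}:9000
```
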

ADLS Gen 2 Storage API - Refusing Http Verbs

I'm having a problem with some endpoints within the ADLS Gen 2 API path operations.
I can create, list, get properties of, and delete file systems just fine.
However, after adding a directory to a file system, certain verbs start failing: HEAD, GET, and DELETE.
For example, I have created a filesystem named c79b0781 with a directory path of abc/def:
Call failed with status code 400 (The HTTP verb specified is invalid - it is not recognized by the server.): DELETE https://myadls.dfs.core.windows.net/c79b0781/abc?recursive=true&timeout=30
For headers, I have:
x-ms-version: 2018-11-09
I can delete the filesystem from the Azure Storage Explorer, but the API is refusing my query.
The List action is also failing with a similar error
Call failed with status code 400 (The HTTP verb specified is invalid - it is not recognized by the server.): GET https://myadls.dfs.core.windows.net/c79b0781?resource=filesystem&recursive=false&timeout=30
With headers:
x-ms-version: 2018-11-09
And finally, my Get Properties is also failing
Call failed with status code 400 (The HTTP verb specified is invalid - it is not recognized by the server.): HEAD https://myadls.dfs.core.windows.net/c79b0781?resource=filesystem&timeout=30
It seems to only happen when I add directories to the file system.
A bit more in depth:
This Test works
PUT https://myadls.dfs.core.windows.net/c79b0781?resource=filesystem
GET https://myadls.dfs.core.windows.net/c79b0781?recursive=false&resource=filesystem
DELETE https://myadls.dfs.core.windows.net/c79b0781?resource=filesystem
My second Test with directory creation
PUT https://myadls.dfs.core.windows.net/c79b0781?resource=filesystem
PUT https://myadls.dfs.core.windows.net/c79b0781/abc/123?resource=directory
After this point, the calls begin rejecting HTTP verbs
GET https://myadls.dfs.core.windows.net/c79b0781?recursive=false&resource=filesystem
Examining my directory create request closer, it looks like this:
PUT https://myadls.dfs.core.windows.net/c79b0781/abc/123?resource=directory
With Headers:
Authorization: [omitted]
Content-Length: 0
And I can see the folders in Storage explorer, I just cannot act on them after this point.
Test Case 2
I started down a path wondering if it was permissions, so I created a new file system through Azure Storage Explorer with an abc/def folder structure inside.
Test 1 (passing)
Get List for directory "abc"
Get List for directory "abc/def"
Test 2 (failing)
Create Directory "uvw/xyz"
Get List for directory "abc" (fails here)
Get List for directory "abc/def"
Get List for directory "uvw/xyz"
Once I create a directory through the api, it is as if the entire filesystem begins rejecting all HTTP requests.
This bug ended up leading me down a rabbit hole into the Flurl implementation I am using for performing REST requests.
My Put method had no body and was calling PutJsonAsync, which sends Content-Type: application/json, whereas according to the spec the content type should be application/octet-stream with a content length of 0.
I replaced the call to PutJsonAsync with PutAsync and everything magically started working.
So the issue was caused by my misuse of Flurl in my wrapper code rather than by the ADLS API itself.
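The shape of the fix can be sketched in library-neutral terms (Python here rather than Flurl/C#; the helper name is mine, the account, filesystem, and path come from the examples above, and the Authorization header is omitted):

```python
# Build the headers for an ADLS Gen 2 "create directory" PUT request.
# Per the failure described above, the body must be empty and the
# content type must NOT be application/json (which a JSON helper such
# as Flurl's PutJsonAsync sends by default).

def directory_create_headers(api_version="2018-11-09"):
    return {
        "x-ms-version": api_version,
        "Content-Type": "application/octet-stream",  # not application/json
        "Content-Length": "0",                       # empty body
    }
```

With the requests library, the call would then be roughly `requests.put("https://myadls.dfs.core.windows.net/c79b0781/abc/123?resource=directory", headers=directory_create_headers(), data=b"")`, plus your Authorization header.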

I am getting the error "AccessRules: Account does not have the right to perform the operation" when using Postman to hit the register API of ejabberd

What version of ejabberd are you using?
17.04
What operating system (version) are you using?
ubuntu 16.04
How did you install ejabberd (source, package, distribution)?
package
What did not work as expected? Are there error messages in the log? What
was the unexpected behavior? What was the expected result?
I used Postman to make an HTTP request to the ejabberd register API. ejabberd is set up and the admin console is running properly at http://localhost:5280/admin.
The Url of http request is - http://localhost:5280/api/register
Body -
{
  "user": "bob",
  "host": "example.com",
  "password": "SomEPass44"
}
Header - [{"key":"Content-Type","value":"application/json","description":""}]
Response -
{
  "status": "error",
  "code": 32,
  "message": "AccessRules: Account does not have the right to perform the operation."
}
I searched a lot and figured out that it requires some changes in the ejabberd.yml file. My yml file is available at the link attached.
Any help would be great.
In the config file /opt/ejabberd/conf/ejabberd.yml, find api_permissions and change the who and what values of the "public commands" rule so that the register command is allowed.
see this post:
http://www.centerofcode.com/configure-ejabberd-api-permissions-solve-account-not-right-perform-operation-issue/
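A minimal api_permissions entry that opens the register command looks roughly like this (a sketch based on the ejabberd 17.x config format, not the asker's actual file; note that `who: all` lets anyone create accounts, so restrict it in production):

```yaml
api_permissions:
  "public commands":
    who:
      - all
    what:
      - register
```
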

get all queue names for activemq in java

I'm trying to get all the names of the queues for ActiveMQ in Java. I found a couple of topics here and here about that, and people suggested using DestinationSource, which I wasn't able to import in Eclipse while writing the code. I tried:
import org.apache.activemq.advisory.DestinationSource;
I'm using Java 1.7 and the latest ActiveMQ version, 5.14.1. Any idea if DestinationSource is still supported?
Thanks,
The easiest way to get a handle on this information is by using Jolokia, which is installed by default. To do this, use an HTTP client to issue a GET request to one of the following URIs:
http://localhost:8161/api/jolokia/search/*:destinationType=Queue,*
http://localhost:8161/api/jolokia/search/*:destinationType=Topic,*
You will need to pass in the JMX username and password (default: admin/admin) as part of the HTTP request. The system will respond with something along the lines of:
{
  "request" : {
    "mbean" : "*:destinationType=Queue,*",
    "type" : "search"
  },
  "status" : 200,
  "timestamp" : 1478615354,
  "value" : [
    "org.apache.activemq:brokerName=localhost,destinationName=systemX.bar,destinationType=Queue,type=Broker",
    "org.apache.activemq:brokerName=localhost,destinationName=systemX.foo,destinationType=Queue,type=Broker",
    "org.apache.activemq:brokerName=localhost,destinationName=ActiveMQ.DLQ,destinationType=Queue,type=Broker"
  ]
}
The above shows the queues systemX.foo, systemX.bar, and ActiveMQ.DLQ. Here's an example of using the curl command for this:
curl -u admin http://localhost:8161/api/jolokia/search/*:destinationType=Queue,* && echo ""
For a good explanation of how to use the Jolokia APIs, refer to the documentation.
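If you want the bare queue names rather than the full MBean object names, the value array is easy to post-process; a sketch in Python, using the sample response shown above:

```python
import json

# Sample Jolokia search response, as shown above (trimmed to the
# fields relevant here).
response = json.loads("""
{
  "status": 200,
  "value": [
    "org.apache.activemq:brokerName=localhost,destinationName=systemX.bar,destinationType=Queue,type=Broker",
    "org.apache.activemq:brokerName=localhost,destinationName=systemX.foo,destinationType=Queue,type=Broker",
    "org.apache.activemq:brokerName=localhost,destinationName=ActiveMQ.DLQ,destinationType=Queue,type=Broker"
  ]
}
""")

def queue_names(mbeans):
    """Pull the destinationName property out of each MBean object name."""
    names = []
    for mbean in mbeans:
        # "domain:key1=v1,key2=v2,..." -> {"key1": "v1", "key2": "v2", ...}
        props = dict(p.split("=", 1) for p in mbean.split(":", 1)[1].split(","))
        names.append(props["destinationName"])
    return names

print(queue_names(response["value"]))  # ['systemX.bar', 'systemX.foo', 'ActiveMQ.DLQ']
```
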
The feature is still supported in the ActiveMQ project, with the caveat that it may not always work, as per the comments already given here. If you have advisory support enabled on the broker, it should give you some insight into the destinations that exist, although JMX would give you more management of those destinations.
There are unit tests that demonstrate the DestinationSource feature which you can refer to. You need to put the activemq-client JAR on the classpath, so perhaps your IDE project isn't configured properly.

multiple file upload to bigquery

I am trying to upload multiple files simultaneously to Google BigQuery using the command-line tool. I got the following error:
BigQuery error in load operation: Could not connect with BigQuery server.
Http response status: 503
Http response content:
Service Unavailable
Is there any way to work around this problem?
How do I upload multiple files simultaneously to Google BigQuery using the command-line tool?
Multiple file upload should work (and we use it every day). If you're getting a 503, that indicates something is wrong with the service. One thing you might want to make sure of is that if you're using a * in your command line, you have it quoted so that the shell doesn't expand it automatically before it gets passed to bq.
If you're getting a 503 error, can you retry the command with the flag --apilog=- (this needs to be one of the first params), which will dump the interaction with the server to stdout? The problem may be obvious from that log, but if it isn't, can you update your question with the relevant portions of the log? If you're not comfortable posting that information on a public forum, can you e-mail it to me at tigani at google dot com?
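Putting both suggestions together, the invocation might look roughly like this (a sketch only; the dataset, table, file pattern, and schema file are placeholders, not from the question):

```shell
# --apilog=- must come before the subcommand; the quotes keep the shell
# from expanding the * before bq sees it.
bq --apilog=- load mydataset.mytable "gs://mybucket/data_*.csv" ./schema.json
```
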