Unable to access AWS endpoints - apache

I am using the following cURL command to access the AWS APIs via Deltacloud:
curl --user "accesskey:secretaccesskey" -H "X-Deltacloud-Driver:ec2" -H "X-Deltacloud-Provider:us-east-1" "http://IPofdeltacloud.com:3001/api/images?format=xml"
But I keep getting the following error:
Aws::AwsError:RequestExpired: Request has expired. Timestamp date is 2013-11-21T12:57:45.000Z
REQUEST=ec2.us-east-1.amazonaws.com:443/?AWSAccessKeyId=AKIAIQUAMKYUKBM2RDMA&Action=DescribeImages&Filter.1.Name=image-type&Filter.1.Value.1=machine&Owner.1=amazon&SignatureMethod=HmacSHA256&SignatureVersion=2&Timestamp=2013-11-21T12%3A57%3A45.000Z&Version=2010-08-31&Signature=OfCOFQKH0lBMbHA4pofIBekmRfXEysAI%2F5c8YIjugUM%3D
REQUEST ID=62efe75b-dbad-4a3c-bb0f-d49eaced2d78
/usr/local/lib/ruby/gems/1.9.1/gems/aws-2.5.6/lib/awsbase/awsbase.rb:579:in `request_info_impl'
/usr/local/lib/ruby/gems/1.9.1/gems/aws-2.5.6/lib/ec2/ec2.rb:179:in `request_info'
/usr/local/lib/ruby/gems/1.9.1/gems/aws-2.5.6/lib/awsbase/awsbase.rb:593:in `request_cache_or_info'
/usr/local/lib/ruby/gems/1.9.1/gems/aws-2.5.6/lib/ec2/ec2.rb:207:in `ec2_describe_images'
/usr/local/lib/ruby/gems/1.9.1/gems/aws-2.5.6/lib/ec2/ec2.rb:252:in `describe_images_by_owner'
/root/testgit/deltacloud/server/lib/deltacloud/drivers/ec2/ec2_driver.rb:155:in `block in images'
/root/testgit/deltacloud/server/lib/deltacloud/drivers/exceptions.rb:173:in `call'
/root/testgit/deltacloud/server/lib/deltacloud/drivers/exceptions.rb:173:in `safely'
/root/testgit/deltacloud/server/lib/deltacloud/drivers/ec2/ec2_driver.rb:154:in `images'
/root/testgit/deltacloud/server/lib/deltacloud/helpers/deltacloud_helper.rb:58:in `block in filter_all'
/usr/local/lib/ruby/1.9.1/benchmark.rb:280:in `measure'
Can anyone suggest what is going wrong?

You have a time skew on your server, and AWS checks the timestamp of the request in order to prevent replay attacks.
You need to set the correct time on your server. If you're running Ubuntu, you can use the following bash command:
ntpdate ntp.ubuntu.com
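If you want to see how far the clock has drifted before (and after) syncing, you can query an NTP server without changing the time. A small sketch (pool.ntp.org is just one choice of server):
# Show the local clock in UTC, to compare against the timestamp AWS rejected.
date -u
# Query an NTP server and report the offset without setting the clock.
ntpdate -q pool.ntp.org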


import status lingers after GraphDB repository deleted

GraphDB Free/9.4.1, RDF4J/3.3.1
I'm working on using the /rest/data/import/server/{repo-id} endpoint to initiate the import of an RDF/XML file.
Steps:
put SysML.owl in the ${graphdb.workbench.importDirectory} directory.
chmod a+r SysML.owl
create repository test1 (in Workbench - using all defaults except RepositoryID := "test1")
curl http://127.0.0.1:7200/rest/data/import/server/test1 => as expected:
[{"name":"SysML.owl","status":"NONE"..."timestamp":1606848520821,...]
curl -XPOST --header 'Content-Type: application/json' --header 'Accept: application/json' -d ' { "fileNames":[ "SysML.owl" ] }' http://127.0.0.1:7200/rest/data/import/server/test1 => SC==202
after 60 seconds, curl http://127.0.0.1:7200/rest/data/import/server/test1 =>
[{"name":"SysML.owl","status":"DONE","message":"Imported successfully in 7s.","context":null,"replaceGraphs":[],"baseURI":
"file:/home/steve/graphdb-import/SysML.owl", "forceSerial":false,"type":"file","format":null,"data":null,"timestamp":
1606848637716, [...other json content deleted]
Repository test1 now has the 263,119 (824 inferred) statements from SysML.owl loaded.
BUT if I then
delete the repository using the Workbench page at http://localhost:7200/repository, wait 180 seconds
curl http://127.0.0.1:7200/rest/data/import/server/test1 => same as in step 5 above, despite the repository having been deleted.
curl -X GET --header 'Accept: application/json' 'http://localhost:7200/rest/repositories' => test1 not shown.
create the repository again using the Workbench, with the same settings as previously. wait 60 seconds. The initial 70 statements are present.
curl http://127.0.0.1:7200/rest/data/import/server/test1 =>
The same output as from the earlier usage with the prior repository instance: "status":"DONE" and the same timestamp, which predates the time at which I deleted and recreated the test1 repository.
The main-2020-12-01.log shows the INFO messages pertaining to the repository test1, plugin registrations, etc. Nothing indicating why the prior repository instance's import status is lingering.
This is a concern because I was expecting to poll the status to determine when the data is loaded so my processing can proceed. Some good news: I can issue the server import request again, and after waiting 60 seconds the 263,119 statements are present. But the timestamp on the import is still the earlier repository instance's timestamp; it was not reset by the latest import request.
I'm probably missing some cleanup step(s), am hoping someone knows which.
Thanks,
-Steve
The status is simply for your reference and doesn't represent the actual presence of data in the repository. You could achieve a similar thing simply by clearing all data in the repository without recreating it.
If you really need to rely on those status records, you can clear the status for a given file once you've polled it and determined it's done (or prior to starting an import) with this curl:
curl -X DELETE http://127.0.0.1:7200/rest/data/import/server/test1/status \
-H 'content-type: application/json' -d '["SysML.owl"]'
Note that this is an undocumented API and it may change without notice.
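If you go that route, the polling could look roughly like this. A sketch, assuming jq is installed; repository id and file name as above:
# Clear any stale status for the file, then kick off the import.
curl -X DELETE http://127.0.0.1:7200/rest/data/import/server/test1/status \
  -H 'content-type: application/json' -d '["SysML.owl"]'
curl -X POST http://127.0.0.1:7200/rest/data/import/server/test1 \
  -H 'content-type: application/json' -d '{"fileNames":["SysML.owl"]}'
# Poll until the status record for the file reports DONE.
until curl -s http://127.0.0.1:7200/rest/data/import/server/test1 \
  | jq -e '.[] | select(.name == "SysML.owl" and .status == "DONE")' > /dev/null; do
  sleep 5
done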

NiFi - how to HTTP POST a PDF document

I want to use NiFi's PostHTTP/InvokeHTTP processors to POST a PDF to an API.
Here is the cURL request I need to replicate in NiFi:
curl -X POST "http://ipaddress:port/api/" -H "accept: application/json" -H "Content-Type: multipart/form-data" -F "pdf_file=@sample.pdf;type=application/pdf"
Which InvokeHTTP property takes the -F information?
Current InvokeHTTP configuration (screenshot not preserved) gives this error:
"400 Bad Request: The browser (or proxy) sent a request that this server could not understand."
Current PostHTTP configuration (screenshot not preserved) fails with this in the server logs:
readv() failed (104: Connection reset by peer) while reading upstream
In older versions of NiFi you have to build the multipart request with your own script and then use InvokeHTTP to make the POST request. You can refer to this post for an ExecuteGroovyScript example:
https://stackoverflow.com/a/57204862
Since NiFi 1.12 you can use InvokeHTTP directly by setting the content type:
https://stackoverflow.com/a/69284300
When you use PostHTTP/InvokeHTTP you aren't referencing an external file; you are sending the content of the flow file. So you first need to bring sample.pdf into NiFi using GetFile or ListFile/FetchFile. The flow file coming out of those processors represents the PDF, and you route that flow file to InvokeHTTP, which POSTs the content of the flow file (the PDF).
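Before building the flow, it can be worth confirming that the endpoint itself accepts the upload with plain curl. Note the @ prefix, which tells curl -F to read the file (the # in the question's command looks like a typo); -F also sets the multipart Content-Type header, including the boundary, automatically:
curl -X POST "http://ipaddress:port/api/" \
  -H "accept: application/json" \
  -F "pdf_file=@sample.pdf;type=application/pdf"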

Pushbullet API from cURL - invalid request

I'm working on an app using Pushbullet's API, but I'm running into odd errors when running through the sample code at https://docs.pushbullet.com/v2/pushes/.
I'm executing the following cURL command (in Windows):
curl -k -u <MY_API_KEY>: -X POST https://api.pushbullet.com/v2/pushes --header 'Content-Type: application/json' --data-binary '{"type": "note", "title": "Note Title", "body": "Note Body"}'
...but it keeps generating the following error:
{"error": {"type":"invalid_request","message":"The param 'type' has an invalid value.","param":"type","cat":"\u003e:3"}}
The other commands for the other endpoints in the documentation work fine...it's just this one.
Got any suggestions? Thanks for the help! :)
It looks like Windows doesn't support those kinds of quotes on the command line. Here's an example that works:
curl https://api.pushbullet.com/v2/pushes -X POST -u <access token>: --header "Content-Type: application/json" --data-binary "{\"type\": \"note\", \"title\":\"Note Title\", \"body\": \"Note Body\"}"
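Alternatively, you can sidestep the shell quoting entirely by putting the JSON in a file and letting curl read it. A sketch, where note.json is an assumed file name containing {"type": "note", "title": "Note Title", "body": "Note Body"}:
curl https://api.pushbullet.com/v2/pushes -X POST -u <access token>: --header "Content-Type: application/json" --data-binary @note.json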
I think I'm going to try to replace the curl examples with something that has less confusing behavior.
I figured it out. I don't really know why, but the cURL command wasn't working through the DOS prompt, and also wasn't working in the Postman REST client for Chrome, but I got it working in the DHC extension for Chrome. The trick was setting the Authorization header type to "Basic", which encodes the Pushbullet access token into another form and makes the HTTP request succeed.
Hope this helps someone down the road if they run into this on Windows!
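For what it's worth, the "Basic" scheme just base64-encodes "<access token>:" (the token as username, with an empty password), which is exactly what curl -u computes for you. A sketch of the equivalent explicit header:
# Base64-encode the token followed by a colon (empty password).
echo -n "<access token>:" | base64
# Send it explicitly; equivalent to curl -u "<access token>:".
curl https://api.pushbullet.com/v2/pushes -H "Authorization: Basic <base64 output from above>"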

Cloudfoundry: How to migrate mongodb data to version 2.0

I am currently migrating my Cloud Foundry app from the soon-to-be-deprecated version 1.0 of Cloud Foundry to version 2.0.
From the command line output it seems the deployment is working fine.
However, I also need to migrate my current mongodb database content.
I successfully dumped my current data using vmc tunnel mongodump; however, I'm not able to restore the data to the new database.
When I enter the following on the command line:
cf tunnel mongolab-xxxMyAmazingApp mongorestore
I get an error message telling me
Opening tunnel on port 10000... FAILED
CFoundry::NotStaged: 170002: App has not finished staging
For more information, see ~/.cf/crash
The crash file contains among others these lines
RESPONSE: [400]
RESPONSE_HEADERS:
connection : keep-alive
content-length : 61
content-type : application/json;charset=utf-8
date : Fri, 28 Jun 2013 15:27:56 GMT
server : nginx
x-content-type-options : nosniff
x-vcap-request-id : fad06d99-6fe0-4544-b1d1-eff53cea3ddd
RESPONSE_BODY:
{
"description": "App has not finished staging",
"code": 170002
}
>>>
cfoundry-2.1.0/lib/cfoundry/baseclient.rb:160:in `handle_error_response'
cfoundry-2.1.0/lib/cfoundry/baseclient.rb:139:in `handle_response'
cfoundry-2.1.0/lib/cfoundry/baseclient.rb:87:in `request'
cfoundry-2.1.0/lib/cfoundry/baseclient.rb:64:in `get'
cfoundry-2.1.0/lib/cfoundry/v2/base.rb:53:in `instances'
cfoundry-2.1.0/lib/cfoundry/v2/app.rb:55:in `instances'
cfoundry-2.1.0/lib/cfoundry/v2/app.rb:201:in `running_instances'
cfoundry-2.1.0/lib/cfoundry/v2/app.rb:176:in `health'
cfoundry-2.1.0/lib/cfoundry/v2/app.rb:212:in `healthy?'
cf-2.1.0/lib/tunnel/tunnel.rb:97:in `helper_healthy?'
cf-2.1.0/lib/tunnel/tunnel.rb:25:in `open!'
cf-2.1.0/lib/tunnel/plugin.rb:41:in `tunnel'
interact-0.5.1/lib/interact/progress.rb:98:in `with_progress'
cf-2.1.0/lib/tunnel/plugin.rb:40:in `tunnel'
mothership-0.5.1/lib/mothership/base.rb:66:in `send'
mothership-0.5.1/lib/mothership/base.rb:66:in `run'
mothership-0.5.1/lib/mothership/command.rb:72:in `invoke'
mothership-0.5.1/lib/mothership/command.rb:86:in `instance_exec'
mothership-0.5.1/lib/mothership/command.rb:86:in `invoke'
mothership-0.5.1/lib/mothership/base.rb:55:in `execute'
cf-2.1.0/lib/cf/cli.rb:156:in `execute'
cf-2.1.0/lib/cf/cli.rb:167:in `save_token_if_it_changes'
cf-2.1.0/lib/cf/cli.rb:155:in `execute'
cf-2.1.0/lib/cf/cli.rb:101:in `wrap_errors'
cf-2.1.0/lib/cf/cli.rb:151:in `execute'
mothership-0.5.1/lib/mothership.rb:45:in `start'
cf-2.1.0/bin/cf:13
/usr/bin/cf:23:in `load'
/usr/bin/cf:23
So what should I do to solve this problem?
Check this link:
http://support.cloudfoundry.com/entries/24464207-Problem-creating-a-tunnel-to-elephantsql
As services are now provisioned outside of Cloud Foundry via third-party vendors, it is not necessary to use a tunnel to connect to them. To get the connection details for your service, log in to https://console.run.pivotal.io, navigate to the associated space, and find the provisioned service. Clicking the "manage" button next to the relevant service will take you to the provider's homepage, where you should be able to obtain connection details.
It is the same for Mongo services: from https://mongolab.com/home you have access to your Mongo services.
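With the host, port, credentials, and database name shown in the MongoLab console, the restore can then be run directly against the service rather than through a tunnel. A sketch with placeholder values, where dump/<database> is the directory produced by mongodump:
mongorestore --host <host>.mongolab.com --port <port> \
  --username <user> --password <password> \
  --db <database> dump/<database>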

CouchDB bad_utf8_character_code

I am using CouchDB through CouchRest in a Ruby on Rails application. When I try to use Futon, it alerts with a message box saying bad_utf8_character_code. If I try to access records from the Rails console using Model.all, it raises one of: a 500 Internal Server Error,
RestClient::ServerBrokeConnection: Server broke connection, or Errno::ECONNREFUSED: Connection refused - connect(2).
Could anyone help me sort out this issue?
I ran into this issue. I tried various curl calls to delete, modify, and even just view the offending document. None of them worked. Finally, I decided to pull the documents down to my local machine one at a time, skip the "bad" one, and then replicate from my local out to production.
Disable the app (so no more records are being written to the db)
Delete and recreate local database (run these commands in a shell):
curl -X DELETE http://127.0.0.1:5984/mydb
curl -X PUT http://127.0.0.1:5984/mydb
Pull down documents from live to local using this Ruby script:
require 'json'

# Fetch the ids of all documents in the production database.
all_docs = JSON.parse(`curl -s http://server.com:5984/mydb/_all_docs`)
ids = all_docs['rows'].map { |doc| doc['id'] }

# Skip the offending document.
bad_ids = ['196ee4a2649b966b13c97672e8863c49']
good_ids = ids - bad_ids

good_ids.each do |curr_id|
  curr_doc = JSON.parse(`curl -s http://server.com:5984/mydb/#{curr_id}`)
  # Drop _id and _rev so the PUT creates the document fresh locally.
  curr_doc.delete('_id')
  curr_doc.delete('_rev')
  # Escape the JSON so it survives interpolation into a double-quoted shell argument.
  data = curr_doc.to_json.gsub("\\'", "'").gsub('"', '\"')
  cmd = %Q~curl -s -X PUT http://127.0.0.1:5984/mydb/#{curr_id} -d "#{data}"~
  puts cmd
  `#{cmd}`
end
Destroy (delete) and recreate the production database (I did this in Futon)
Replicate
curl -X POST http://127.0.0.1:5984/_replicate -d '{"target":"http://server.com:5984/mydb", "source":"http://127.0.0.1:5984/mydb"}' -H "Content-Type: application/json"
Restart app
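Once the replication finishes, you can sanity-check it by comparing document counts on both sides (a GET on a database returns its metadata, including doc_count):
curl http://127.0.0.1:5984/mydb
curl http://server.com:5984/mydb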