I am having trouble properly setting headers for the Bitfinex API (https://www.bitfinex.com/pages/api). I have no trouble with the unauthenticated GET calls, but I cannot get my authenticated POST calls working. An example call that I am working with is a POST to "/balances". I am hoping that somebody who uses the API can help me see what I am doing wrong. Here is some sample input and output (fake keys, of course) that I am currently generating:
Private Key:
012345abcdef
API Key:
000111aaafff
Payload:
{"request": "/v1/balances","nonce": "1413737362"}
Base64 Payload:
e3JlcXVlc3Q6IC92MS9iYWxhbmNlcyxub25jZTogMTQxMzczNzM2Mn0=
Using the OpenSSL command:
echo -n 'e3JlcXVlc3Q6IC92MS9iYWxhbmNlcyxub25jZTogMTQxMzczNzM2Mn0=' | openssl dgst -hmac 012345abcdef -sha384 -hex
to get a signature of
b18953370fad9bd5dd482d6ae07aeb96fdebd812e98cbf847f2d923bf66d1579eb31e10e1d79c7ae8405c54e28d0ae2a
So I get the Headers:
"X-BFX-APIKEY" "000111aaafff"
"X-BFX-PAYLOAD" "e3JlcXVlc3Q6IC92MS9iYWxhbmNlcyxub25jZTogMTQxMzczNzM2Mn0="
"X-BFX-SIGNATURE" "b18953370fad9bd5dd482d6ae07aeb96fdebd812e98cbf847f2d923bf66d1579eb31e10e1d79c7ae8405c54e28d0ae2a"
I have been trying everything I can think of and the responses I get from the API switch between "Invalid X-BFX-SIGNATURE." and "Invalid json.".
Where is the flaw in my process? I cannot see what I am doing incorrectly.
I was using a Unix system call to run the OpenSSL command. The result was returned on two lines, and I was only reading the first line. Reading all lines until encountering an end of file solved the problem.
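For what it's worth, the shell-out (and the multi-line output parsing) can be avoided entirely by computing the HMAC in-process. Here is a minimal sketch in Kotlin, assuming the standard javax.crypto API and reusing the fake key and payload from the question; it mirrors the signing steps described above (hex HMAC-SHA384 over the Base64 payload):

import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec
import java.util.Base64

fun main() {
    val apiSecret = "012345abcdef"  // fake private key from the question
    val payloadJson = """{"request": "/v1/balances","nonce": "1413737362"}"""

    // X-BFX-PAYLOAD: the Base64-encoded JSON payload
    val payload = Base64.getEncoder().encodeToString(payloadJson.toByteArray())

    // X-BFX-SIGNATURE: hex-encoded HMAC-SHA384 of the Base64 payload, keyed with the private key
    val mac = Mac.getInstance("HmacSHA384")
    mac.init(SecretKeySpec(apiSecret.toByteArray(), "HmacSHA384"))
    val signature = mac.doFinal(payload.toByteArray()).joinToString("") { "%02x".format(it) }

    println("X-BFX-PAYLOAD: $payload")
    println("X-BFX-SIGNATURE: $signature")
}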
I am trying to automate a Trello JSON export with cURL, following their directions found here.
For step 1, I use this request to start an export:
curl -X POST 'https://trello.com/1/organizations/{organizationNameOrId}/exports?key={key}&token={token}' --data 'attachments=false'
This starts an export and gives me the same output as in their example. Then I go to step 2 and use this request, just like their directions say:
curl https://trello.com/1/organizations/{orgIdOrName}/exports/{exportId}?key={key}&token={token}
but instead of getting the same output as them, I get a message that says
can't read the state of an export
Then when I press Enter I get
[1] <random number>
Done, followed by a bunch of empty space and my original request from step 2 minus the token.
Has anyone else ever had this issue? I can't seem to find it anywhere.
Figured it out: they're missing the apostrophes (single quotes) around the URL in step 2. Without them, the shell interprets the & in the query string as "run in the background", which is why the token parameter gets dropped and the job-control output appears. It should be
curl 'https://trello.com/1/organizations/{orgIdOrName}/exports/{exportId}?key={key}&token={token}'
In this sample from the Ktor website, https://ktor.io/samples/feature/auth.html, they use an account "test" with password "test" as an example.
@UseExperimental(KtorExperimentalAPI::class)
val hashedUserTable = UserHashedTableAuth(
    getDigestFunction("SHA-256") { "ktor${it.length}" },
    table = mapOf(
        "test" to Base64.getDecoder().decode("GSjkHCHGAxTTbnkEDBbVYd+PUFRlcWiumc4+MWE9Rvw=") // sha256 for "test"
    )
)
I need to create another entry, but I can't figure out how they got that hash. I tried to SHA-256 the word "test", salted or not, and tried to Base64 the result... Nothing matches that hash, so I'm unable to create another user.
Could anyone enlighten me on how to create a hash compatible with that code?
After a lot of trial and error... Here's how to duplicate it:
echo -n ktor4test | openssl dgst -binary -sha256 | openssl base64
I hope this helps someone in the future avoid wasting as much time as I did.
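If you want to generate entries in code instead of on the command line, here is a small Kotlin sketch of the same recipe (SHA-256 over the salt "ktor" + password length, concatenated with the password, then Base64); the helper function name is mine, not part of Ktor:

import java.security.MessageDigest
import java.util.Base64

// Mirrors getDigestFunction("SHA-256") { "ktor${it.length}" }: digest salt + password
fun sampleHash(password: String): String {
    val salted = "ktor${password.length}$password"  // "test" -> "ktor4test"
    val digest = MessageDigest.getInstance("SHA-256").digest(salted.toByteArray())
    return Base64.getEncoder().encodeToString(digest)
}

fun main() {
    // Should print GSjkHCHGAxTTbnkEDBbVYd+PUFRlcWiumc4+MWE9Rvw= for the sample's "test" user
    println(sampleHash("test"))
}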
I have seen this way of passing the results of one command to another in Redis, and via the command line it works well:
src/redis-cli keys '*' | xargs src/redis-cli mget
However, how can we achieve the same effect via Lettuce? (I started trying out 4.0.2.Final.)
A solution to this is also particularly important in the following scenario.
Say we are using the geolocation capabilities, and we add a set of locations for "my-location-category"
using GEOADD:
GEOADD "category-1" 8.6638775 49.5282537 "location-id:1" 8.3796281 48.9978127 "location-id:2" 8.665351 49.553302 "location-id:3"
Next, say we do a GEORADIUS to get the locations within a 10 km radius of 8.6582361 49.5285495 for "category-1",
and we get back "location-id:1" and "location-id:3".
Given that I have already set values for the keys "location-id:1" and "location-id:3",
I want to pipe the commands so that the GEORADIUS is followed by an MGET on all the matching results.
Does Redis provide a feature to do that?
And/or how can we achieve this via the Lettuce client library without first manually iterating through the results of the GEORADIUS and then doing a manual MGET?
That would be more efficient for the program that uses it.
Does anyone know how we can do this?
Update
This is the piped command for the scenario I discussed above:
src/redis-cli GEORADIUS "category-1" 8.6582361 49.5285495 10 km | xargs src/redis-cli mget
Now we need to know how to do this via Lettuce.
IMPORTANT: never use KEYS, always use SCAN instead if you must.
This isn't really a question about Lettuce or Java, so I can actually answer it :)
What you're trying to do is use the results of one read operation (GEORADIUS) as input (key names) for another read operation (MGET). This type of flow can't be pipelined, well, just because of that: pipelining means you don't need the answers to operations right away, but in your case you do.
However.
Since you're reading String keys with MGET, you might as well just denormalize everything (remember, we're NoSQL) and store the contents of these keys in the Sorted Set's members, e.g.:
GEOADD "category-1" 8.6638775 49.5282537 "location-id:1:moredata:evenmoredata:{maybe a JSON document here}:orperhapsmsgpack"
This will allow you to get the locations and their "data" with one GEORADIUS call. Of course, any updates to location-id:1's data will need to be done across all categories.
A note about Lua scripts: while a Lua script could definitely save on the back and forth in this case, any such script will be against best practices/not cluster safe.
After digging around and studying Lua scripting, my conclusion is that removing round trips in such a way can only be done via Lua scripts, as suggested by Itamar Haber.
I ended up creating a Lua script file (myscript.lua) as below:
-- Look up the keys of all locations within 10 km of the given point in "category-1"
local locationKeys = redis.call('GEORADIUS', 'category-1', '8.6582361', '49.5285495', '10', 'km')
if unpack(locationKeys) == nil then
    -- GEORADIUS matched nothing; there is nothing to MGET
    return nil
else
    -- Fetch the values stored under all matching location keys in a single MGET
    return redis.call('MGET', unpack(locationKeys))
end
** Of course we should be passing parameters into this... this is just a PoC :)
Now you can execute it via the command:
src/redis-cli EVAL "$(cat myscript.lua)" 0
Then, to reduce the network overhead of sending the entire script to Redis for each execution, we have the option of registering the script with Redis.
Redis will give us back a SHA-1 digest for the script, which can be used to reference it in subsequent calls.
This can be done as below:
src/redis-cli SCRIPT LOAD "$(cat myscript.lua)"
This should give back a SHA-1 digest, something like this: 49730aa2ed3034ee48f818e486b2bdf1b500b19e
Subsequent calls can then be made using this digest, e.g.
src/redis-cli evalsha 49730aa2ed3034ee48f818e486b2bdf1b500b19e 0
The sad part here, however, is that the SHA-1 digest is remembered only as long as the Redis instance is running. If it is restarted, the script cache is lost and you have to do the SCRIPT LOAD once again. (If nothing in the script changes, the SHA-1 digest will be the same.)
Ideally, when going through a client API, we should first attempt EVALSHA; if that returns a "No matching script" error, we should fall back to SCRIPT LOAD, procure the SHA-1 digest once again, keep an internal map of it, and use it for further calls.
This can well be done via Lettuce; I could find the methods for all of these. I hope this gives a good insight into a solution for the problem.
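For reference, a rough sketch of that evalsha-with-fallback pattern using Lettuce 4.x from Kotlin could look like the following. The class and method names (RedisClient, scriptLoad, evalsha, ScriptOutputType, RedisCommandExecutionException under com.lambdaworks.redis) are what I believe that release exposes, so treat them as assumptions and verify against the version you actually use:

import com.lambdaworks.redis.RedisClient
import com.lambdaworks.redis.RedisCommandExecutionException
import com.lambdaworks.redis.ScriptOutputType
import java.io.File

fun main() {
    val script = File("myscript.lua").readText()
    val commands = RedisClient.create("redis://localhost").connect().sync()

    // SCRIPT LOAD once and cache the SHA-1 digest Redis hands back
    var sha1 = commands.scriptLoad(script)

    val locations: List<Any> = try {
        // EVALSHA with no keys, expecting an array reply (the MGET results from the script)
        commands.evalsha<List<Any>>(sha1, ScriptOutputType.MULTI)
    } catch (e: RedisCommandExecutionException) {
        // "No matching script": the script cache was flushed (e.g. Redis restarted) -- reload and retry
        if (e.message?.contains("NOSCRIPT") == true) {
            sha1 = commands.scriptLoad(script)
            commands.evalsha<List<Any>>(sha1, ScriptOutputType.MULTI)
        } else {
            throw e
        }
    }
    println(locations)
}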
I am trying to replicate, in R, the curl call mentioned in the Splunk REST API documentation in order to perform a search. Sorry, I won't be able to provide details on the parameters to replicate; hence I'm attaching the link for reference.
curl -u admin:changeme -k https://localhost:8089/services/search/jobs -d search="search *"
This returns a sid from curl. However, when I try to replicate the same call in R using httr, it returns a list of all existing search details instead. I have tried both POST and GET in httr, just in case. Below is sample code; ideally it should return a sid, but it returns the list of existing search details. I'm not sure what I am missing. I am new to RCurl and httr, and I tried curlPerform as well with the same result. What exactly does -d do in curl, and is that the thing I am missing in my replication?
response <- GET(splunk_server,path=search_job_export_endpoint,
config(ssl_verifyhost=FALSE, ssl_verifypeer=0),
authenticate(username, password),
query=list(search=urlencode(search_terms)),
verbose())
result <- read.table(text=content(response, as="text"), sep=",", header=TRUE,
stringsAsFactors=FALSE)
I am getting an "Unexpected" error when loading data from Google Storage into BigQuery. I tried a few times, and I still could not load the data. Is there any other way to load data?
gs://log_data/r_mini_raw_20120510.txt.gz to 567402616005:myv.may10c
Errors:
Unexpected. Please try again.
Job ID: job_4bde60f1c13743ddabd3be2de9d6b511
Start Time: 1:48pm, 12 May 2012
End Time: 1:51pm, 12 May 2012
Destination Table: 567402616005:myvserv.may10c
Source URI: gs://log_data/r_mini_raw_20120510.txt.gz
Delimiter: ^
Max Bad Records: 30000
Schema:
zoneid: STRING
creativeid: STRING
ip: STRING
Update:
I am using the file that can be found here:
http://saraswaticlasses.net/bad.csv.zip
bq load -F '^' --max_bad_record=30000 mycompany.abc bad.csv id:STRING,ceid:STRING,ip:STRING,cb:STRING,country:STRING,telco_name:STRING,date_time:STRING,secondary:STRING,mn:STRING,sf:STRING,uuid:STRING,ua:STRING,brand:STRING,model:STRING,os:STRING,osversion:STRING,sh:STRING,sw:STRING,proxy:STRING,ah:STRING,callback:STRING
I am getting an error "BigQuery error in load operation: Unexpected. Please try again."
The same file works from Ubuntu, while it does not work from CentOS 5.4 (Final).
Does the OS encoding need to be checked?
The file you uploaded has an unterminated quote. Can you delete that line and try again? I've filed an internal BigQuery bug so that we can handle this case more gracefully.
$grep '"' bad.csv
3000^0^1.202.218.8^2f1f1491^CN^others^2012-05-02 20:35:00^^^^^"Mozilla/5.0^generic web browser^^^^^^^^
When I run a load from my workstation (Ubuntu), I get a warning about the line in question. Note that if you were using a larger file, you would not see this warning; instead, you'd just get a failure.
$bq show --format=prettyjson -j job_e1d8636e225a4d5f81becf84019e7484
...
"status": {
"errors": [
{
"location": "Line:29057 / Field:12",
"message": "Missing close double quote (\") character: field starts with: <Mozilla/>",
"reason": "invalid"
}
]
My suspicion is that you have rows or fields in your input data that exceed the 64 KB limit. Perhaps re-check the formatting of your data, check that it is gzipped properly, and if all else fails, try importing uncompressed data. (One possibility is that the entire compressed file is being interpreted as a single row/field that exceeds the aforementioned limit.)
To answer your original question, there are a few other ways to import data: you could upload directly from your local machine using the command-line tool or the web UI, or you could use the raw API. However, all of these mechanisms (including the Google Storage import that you used) funnel through the same CSV parser, so it's possible that they'll all fail in the same way.