How to POST multipart/form-data from VB.NET

I'm currently developing a VB.NET program in Visual Studio 2017, and while I'm reasonably competent with most aspects of VB.NET, this one has me completely stumped.
The program needs to POST multipart/form-data to Mixcloud (which is like Soundcloud but for radio programmes). After some research, all I have found are dead ends. In Mixcloud's API for uploading (found here), they give a curl example of how to do it. Would anyone have an idea of how to implement this in VB.NET? Many thanks in advance for any help available.
Here is their example:
curl -F mp3=@upload.mp3 \
-F "name=API Upload" \
-F "tags-0-tag=Test" \
-F "tags-1-tag=API" \
-F "sections-0-chapter=Introduction" \
-F "sections-0-start_time=0" \
-F "sections-1-artist=Artist Name" \
-F "sections-1-song=Song Title" \
-F "sections-1-start_time=10" \
-F "description=My test upload" \
https://api.mixcloud.com/upload/?access_token=INSERT_ACCESS_TOKEN_HERE
(The "sections" are essentially markers that are shown on the player to highlight different points of a radio show - just to provide some context!)

Related

Trying to Use CFEXECUTE with Curl to Post via API to Rumble

I am trying to convert this curl example script (from Rumble's Upload API) to run via CFEXECUTE:
curl -F "access_token=XXXXX" \ -F "title=A cool video" \ -F "description=Some detailed description" \ -F "license_type=0" \ -F "channel_id=123" \ -F "video=#video.mp4" \ -F "thumb=#thumbnail.jpg" \ "https://rumble.com/api/simple-upload.php"
I've been trying to get the following code working to upload videos via API to our Rumble account. I'm new to Curl and CFEXECUTE but not new to Coldfusion:
<cfexecute name = "/usr/bin/curl" arguments = "-X POST --insecure https://rumble.com/api/simple-upload.php -F access_token=#access# -F title=#titletouse# - F description=#descript# -F license_type=0 -F channel_id=#channel# -F video=@#form.video#" variable="response" timeout = "999"> </cfexecute>
Most of the time the response is: [empty string]
The variables listed are required. I'm pretty sure it IS connecting to Rumble, because I tried a bunch of different versions: one was without POST and got back JSON response data, and another was without an -F before description and got back: { "success": false, "errors": { "description": { "code": "MISSING_OR_INVALID_VALUE" } } }
For example, I tried:
with single quotes around each form field '-F license_type=0'
with -H 'Content-Type:multipart/form-data'
with semicolons between fields: -F licensetype=0;-F channel_id=#channel#
Any suggestions on what I'm doing wrong? I have tried about 30 different things.... and am out of ideas. Thank you!!!!!

Any way to use presigned URL uploads and enforce tagging?

Is there any way to issue a presigned URL to a client to upload a file to S3, and ensure that the uploaded file has certain tags? Using the Python SDK here as an example, this generates a URL as desired:
s3.generate_presigned_url('put_object',
                          ExpiresIn=3600,
                          Params=dict(Bucket='foo',
                                      Key='bar',
                                      ContentType='text/plain',
                                      Tagging='foo=bar'))
This is satisfactory when uploading while explicitly providing tags:
$ curl 'https://foo.s3.amazonaws.com/bar?AWSAccessKeyId=...&Signature=...&content-type=text%2Fplain&x-amz-tagging=foo%3Dbar&Expires=1538404508' \
-X PUT \
-H 'Content-Type: text/plain' \
-H 'x-amz-tagging: foo=bar' \
--data-binary foobar
However, S3 also accepts the request when omitting -H 'x-amz-tagging: foo=bar', which uploads the object without tags. Since I don't have control over the client, that's… bad.
I've tried creating an empty object first and tagging it, then issuing the presigned URL to it, but PUTting the object replaces it entirely, including removing any tags.
I've tried issuing a presigned POST URL, but that doesn't seem to support the tagging parameter at all:
s3.generate_presigned_post('foo', 'bar', {'tagging': '<Tagging><TagSet><Tag><Key>Foo</Key><Value>Bar</Value></Tag></TagSet></Tagging>'})
$ curl https://foo.s3.amazonaws.com/ \
-F key=bar \
-F 'tagging=<Tagging><TagSet><Tag><Key>Foo</Key><Value>Bar</Value></Tag></TagSet></Tagging>' \
-F AWSAccessKeyId=... \
-F policy=... \
-F signature=... \
-F file=@/tmp/foo
<Error><Code>AccessDenied</Code><Message>Invalid according to Policy:
Extra input fields: tagging</Message>...
I simply want to let a client upload a file directly to S3, and ensure that it's tagged a certain way in the process. Any way to do that?
Try the following code:
import copy

fields = {
    "x-amz-meta-u1": "value1",
    "x-amz-meta-u2": "value2"
}
conditions = [
    {"x-amz-meta-u1": "value1"},
    {"x-amz-meta-u2": "value2"}
]
presigned_url = s3_client.generate_presigned_post(
    "YOUR_BUCKET_NAME", "YOUR_OBJECT_KEY",
    Fields=copy.deepcopy(fields),
    Conditions=copy.deepcopy(conditions)
)
Python code:
fields = {
    'tagging': '<Tagging><TagSet><Tag><Key>Foo</Key><Value>Bar</Value></Tag></TagSet></Tagging>',
}
conditions = [
    {'tagging': '<Tagging><TagSet><Tag><Key>Foo</Key><Value>Bar</Value></Tag></TagSet></Tagging>'}
]
presigned_url = s3_client.generate_presigned_post(
    Bucket="foo",
    Key="file/key.json",
    Fields=copy.deepcopy(fields),
    Conditions=copy.deepcopy(conditions)
)
CURL command:
$ curl -v --form-string "tagging=<Tagging><TagSet><Tag><Key>Foo</Key><Value>Bar</Value></Tag></TagSet></Tagging>" \
-F key=file/key.json \
-F x-amz-algorithm=... \
-F x-amz-credential=... \
-F x-amz-date=... \
-F x-amz-security-token=... \
-F policy=... \
-F x-amz-signature=... \
-F file=@key.json \
https://foo.s3.amazonaws.com/
Explanation
It is imperative that --form-string is used in the CURL command; with a plain -F, curl would interpret the leading =< as an instruction to read the field value from a file!
Also ensure that key.json is in your current working directory so that CURL can upload the file to S3 using the pre-signed URL.
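Incidentally, if the presigned POST ultimately has to be sent from .NET rather than curl (as in the question at the top of this page), the same form can be built with MultipartFormDataContent; string fields added that way are plain values, so there is no equivalent of curl's =< pitfall. A rough sketch only: the tagging XML, key, and URL come from the answer above, while the signed field values are placeholders (the remaining x-amz-* fields from the curl command would be added the same way):
Imports System.IO
Imports System.Net.Http
Imports System.Threading.Tasks

Module S3PresignedPost
    Async Function UploadAsync() As Task
        Using client As New HttpClient()
            Using form As New MultipartFormDataContent()
                ' Values below are placeholders; real ones come from generate_presigned_post
                form.Add(New StringContent("<Tagging><TagSet><Tag><Key>Foo</Key><Value>Bar</Value></Tag></TagSet></Tagging>"), "tagging")
                form.Add(New StringContent("file/key.json"), "key")
                form.Add(New StringContent("..."), "policy")
                form.Add(New StringContent("..."), "x-amz-signature")
                ' S3 requires the file part to come last in the form
                form.Add(New ByteArrayContent(File.ReadAllBytes("key.json")), "file", "key.json")
                Dim response = Await client.PostAsync("https://foo.s3.amazonaws.com/", form)
                Console.WriteLine(response.StatusCode)
            End Using
        End Using
    End Function
End Module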

using GNU Parallel for pagination

I like GNU Parallel and have tried to use it for pagination but need help to get it working successfully. Basically, I am following the use cases on the Quickblox API guide to get data:
http://quickblox.com/developers/Custom_Objects#Get_related_records
The maximum number of records one can retrieve is 100 per page, and one can only retrieve a page at a time. These are specified via the -d parameter. I want to use GNU Parallel to obtain pages 1..79.
I found a thread that explains how to use GNU Parallel when you have parameters that take on many different values but haven't been able to successfully adapt it to my case.
GNU Parallel - parallelize serial command line programs without changing them
Here is the command for a single page (page 3):
curl -X GET -H "QB-Token: 7de49c25f44e557aeed1b635" -d "page=3" -d "per_page=100" https://api.quickblox.com/users.xml > qblox_users_page3_100perpage
Your help would be greatly appreciated!
If you want output in different files:
parallel 'curl -X GET -H "QB-Token: 7de49c25f44e557aeed1b635" -d "page={}" -d "per_page=100" https://api.quickblox.com/users.xml > qblox_users_page{}_100perpage' ::: {1..79}
If you want it in a single big file:
parallel -k 'curl -X GET -H "QB-Token: 7de49c25f44e557aeed1b635" -d "page={}" -d "per_page=100" https://api.quickblox.com/users.xml' ::: {1..79} > qblox_users
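The same fan-out also works from .NET if curl and GNU Parallel aren't available. A sketch using Task.WhenAll, assuming QuickBlox accepts page and per_page as query-string parameters (the curl commands above pass them with -d instead):
Imports System.Linq
Imports System.Net.Http
Imports System.Threading.Tasks

Module QuickbloxPages
    Async Function FetchAllAsync() As Task
        Using client As New HttpClient()
            client.DefaultRequestHeaders.Add("QB-Token", "7de49c25f44e557aeed1b635")
            ' Start all 79 requests, then await them; results stay in page order
            Dim tasks = Enumerable.Range(1, 79).Select(
                Function(page) client.GetStringAsync(
                    "https://api.quickblox.com/users.xml?page=" & page.ToString() & "&per_page=100"))
            Dim pages As String() = Await Task.WhenAll(tasks)
        End Using
    End Function
End Module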

Error submitting iOS .app using TestFlight API

I'm running the following script:
#!/bin/bash
archive="./builds/myapp.ipa"
curl http://testflightapp.com/api/builds.json \
-F file=$archive \
-F api_token='xxxxxxxxxxxxxxxxxxxxxxxxxx' \
-F team_token='xxxxxxxxxxxxxxxxxxxxxxxxxx' \
-F notes='here comes the new app!' \
-F notify=True \
-F distribution_lists='MyFriends'
but I'm getting the error:
You must supply api_token, team_token, the file and notes (missing file)
I'm actually copy/pasting the script from the TestFlight website. What's wrong with that?
Please note that, as seen in the example given in the TestFlight API Documentation, you need to use the '@' character before the IPA file name.
You should try with:
#!/bin/bash
archive="./builds/myapp.ipa"
curl http://testflightapp.com/api/builds.json \
-F file=@$archive \
-F api_token='xxxxxxxxxxxxxxxxxxxxxxxxxx' \
-F team_token='xxxxxxxxxxxxxxxxxxxxxxxxxx' \
-F notes='here comes the new app!' \
-F notify=True \
-F distribution_lists='MyFriends'

Authenticating with the new Google Analytics Core Reporting API and OAuth 2.0

Google Analytics's new "Core Reporting API" (version 3.0) "recommend[s] using OAuth 2.0 to authorize requests" (citation). Its documentation, though, is very unclear about how to do that. (It says "When you create your application, you register it with Google" (citation), but does a shell script count as an "application"?? If so, I should register the bash script at the "APIs Console", which doesn't give any guidance on how to do so.) Using Analytics' version 2.3, I run a bash script:
#!/bin/bash
# generates an XML file
googleAuth="$(curl https://www.google.com/accounts/ClientLogin -s \
-d Email=foo \
-d Passwd=bar \
-d accountType=GOOGLE \
-d source=curl-dataFeed-v2 \
-d service=analytics \
| awk '/Auth=.*/')"
# ...
feedUri="https://www.google.com/analytics/feeds/data\
?ids=$table\
&start-date=$SD\
&end-date=$ED\
&dimensions=baz\
&metrics=xyzzy\
&prettyprint=true"
# ...
curl $feedUri --silent \
--header "Authorization: GoogleLogin $googleAuth" \
--header "GData-Version: 2" \
| awk # ...
How would I do something like this (a script that grabs whatever login token I need and sends it back) for the new Analytics?
(Incidentally, yes, I realize the results will be JSON, not XML.)
Here is a sample script:
googleAuth="$(curl https://www.google.com/accounts/ClientLogin -s \
-d Email=$USER_EMAIL \
-d Passwd=$USER_PASS \
-d accountType=GOOGLE \
-d source=curl-accountFeed-v1 \
-d service=analytics \
| grep "Auth=" | cut -d"=" -f2)"
feedUri="https://www.googleapis.com/analytics/v3/data/ga\
?start-date=$START_DATE\
&end-date=$END_DATE\
&ids=ga:$PROFILE_ID\
&dimensions=ga:userType\
&metrics=ga:users\
&max-results=50\
&prettyprint=false"
curl $feedUri -s --header "Authorization: GoogleLogin auth=$googleAuth"
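And if the same report needs to be pulled from .NET, the Authorization header carries over directly. A sketch assuming googleAuth was obtained as in the script above (bearing in mind that ClientLogin is deprecated, which is the point of the question); the dates and profile ID are placeholders:
Imports System.Net.Http
Imports System.Threading.Tasks

Module AnalyticsQuery
    Async Function QueryAsync(googleAuth As String, profileId As String) As Task
        Using client As New HttpClient()
            ' Same GoogleLogin header the curl call sends
            client.DefaultRequestHeaders.TryAddWithoutValidation(
                "Authorization", "GoogleLogin auth=" & googleAuth)
            Dim uri As String = "https://www.googleapis.com/analytics/v3/data/ga" &
                "?start-date=2012-01-01&end-date=2012-01-31" &
                "&ids=ga:" & profileId &
                "&dimensions=ga:userType&metrics=ga:users&max-results=50"
            Console.WriteLine(Await client.GetStringAsync(uri))
        End Using
    End Function
End Module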