I am working with the Let's Crate API; it uses Basic Auth and gives examples of how to use it with curl:
Takes file and crate_id as
parameters. file is the file data.
E.g. curl -F file=@filename.txt -F
crate_id=123
https://api.letscrate.com/1/files/upload.json
But I have never used curl before, so how do I build my Mac application when the API involves curl? I know this is most likely a very simple question, but I think the fact that I have never used curl before is confusing me.
I can get it to work in Terminal by using this:
curl -u <username:password> -F file=@<filename> -F crate_id=<crate ID (number)> https://api.letscrate.com/1/files/upload.json
But I want the user to log in, then just click upload, select the file they want, and send it. Nothing overly hard; I just don't know how to get curl to play nicely with Objective-C.
Thanks in advance, I hope you can help.
You need to use libcurl if you want curl functionality within an Objective-C application. libcurl is curl's library, and since it's written in C, it plays nicely with Objective-C.
The question may be rudimentary, but I'm not familiar with how it could be done efficiently. I am trying to understand how a Docker image works that would expose a variety of endpoints like the following:
http://localhost:8080/predictions/{something}
or
http://localhost:8080/metrics/{something}
What command or tool could I use to obtain all the possible options for that {something} that return a healthy response?
Given:
I connect to the uni's secure shell like this:
me@my_computer~$ ssh <my_name>@unixyz.cs.xy.com
Password:***********
Welcome to Unixyz. You can now access a terminal on system unixyz:
my_name@unixyz~$ ls
Desktop Documents Pictures Music desired_document.pdf
my_name@unixyz~$
Task/Question:
Getting desired_document.pdf to my own system. I have thought of some options so far:
1) Since I can access an editor like nano, I could write a C/Java program, compile it in the home directory, and make that program send the PDF. Problem with that: I would have to code a client on the uni machine and a server on my own system. On top of that, I only know how to transfer text given to stdin, not PDFs. And it's obviously too much work for the given task.
2) I found some vague information about the commands scp and sftp. Unfortunately, I cannot figure out exactly how they are used.
The latter is basically my question: are scp and sftp valid options for doing this, and how are they used?
EDIT:
I received a first answer, but the problem persists. As stated, I use:
scp me@server.cs.xyz.com:/path/topdf /some/local/dir
which gives me:
/some/local/dir: no such file or directory
I'm not sure which environment you are in.
Do you use Linux or Windows as your everyday operating system?
If you are using Windows, there are some UI-based scp/ssh implementations that let you transfer these files using an Explorer-style UI.
For example there is https://winscp.net/
You can indeed use scp to do exactly that, and it's easier than it might look:
scp your_username@unixyz.cs.xy.com:path/to/desired_document.pdf /some/local/dir
The key is the colon after the server name, where you add your path.
Optionally you can pass in the password as well, but that's bad practice, for obvious reasons.
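Since the question also asks about sftp: it is just as valid, only interactive. A minimal sketch, assuming the same username, host, and paths as in the scp line above:
sftp your_username@unixyz.cs.xy.com
sftp> get path/to/desired_document.pdf /some/local/dir
sftp> exit
For a one-off copy, scp is usually more convenient, since it's a single non-interactive command.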
I actually figured out the answer and the error I was having myself. Both the guy with the answer and the commenter were right. BUT:
scp must be launched while you are in YOUR terminal; I always tried to do it while I was connected to the remote server.
2 hours wasted because of that.
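In other words, run it from the local prompt, not the remote one (the same command as in the edit above, just launched on your own machine):
me@my_computer~$ scp me@server.cs.xyz.com:/path/topdf /some/local/dir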
I have a question about APIs and cURL. I'm not sure if this is all Python, but I am trying to access JSON data using an API, and the server isn't as easy as grabbing the data with an XMLHttpRequest... The support team gave me this line of code:
curl -k -s --data "api_id=xxxx&api_key=xxxx&time_range=today&site_id=xxxxx"
https://my.incapsula.com/api/stats/v1
And I have no idea what this even means, because all the API requests I've been making were just as easy as using a link and parsing through the response with some JavaScript. Can anyone break down the -k -s --data for me or point me to the right tutorial?
(NOT PYTHON; Sorry guys...)
The right tutorial is the man page.
-k/--insecure
(SSL) This option explicitly allows curl to perform "insecure" SSL connections and transfers. All SSL connections are attempted to be made secure by using the CA certificate bundle installed by default. This makes all connections considered "insecure" fail unless -k/--insecure is used.
See this online resource for further details: http://curl.haxx.se/docs/sslcerts.html
-s/--silent
Silent or quiet mode. Don't show progress meter or error messages. Makes Curl mute.
As for --data, it specifies the data you are sending to the server, as the body of a POST request.
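Putting the pieces together, the command from the question is equivalent to this long-option form (the api_id/api_key/site_id values are placeholders, as in the original):
curl --insecure --silent \
  --data "api_id=xxxx&api_key=xxxx&time_range=today&site_id=xxxxx" \
  https://my.incapsula.com/api/stats/v1
Because --data is present, curl sends a POST with a Content-Type of application/x-www-form-urlencoded, which is why this endpoint can't be fetched like a plain link.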
This question is (for now) not related to Python at all, but possibly to shell scripting.
Until now, I used Google Calendar and did my personal backup with a daily wget of the public .ics link.
Now I want to switch to a new service that only has CalDAV access.
Is there a way to download all my CalDAV and CardDAV data with one wget/curl call?
The downloaded data should make it possible to restore lost data.
Thanks in advance.
EDIT:
I created a very simple PHP file which works in the way hmh explained. I don't know if this approach works for other providers, but for mailbox.org it works well.
You can find it here https://gist.github.com/ahemwe/a2eaae4d56ac85969cf2.
Please be more specific: what is the new service/server you are using?
This is not specific to CalDAV, but most DAV servers still provide a way to grab all events/todos in a single request, usually by targeting the relevant collection with a GET, e.g. one of these:
curl -X GET -u login -H "Accept: text/calendar" https://myserver/joe/home/
curl -X GET -u login -H "Accept: text/calendar" https://myserver/joe/home.ics
In CalDAV/CardDAV you can grab the whole contents of a collection using a PROPFIND:
curl -X PROPFIND -u login -H "Content-Type: text/xml" -H "Depth: 1" \
--data "<propfind xmlns='DAV:'><prop><calendar-data xmlns='urn:ietf:params:xml:ns:caldav'/></prop></propfind>" \
https://myserver/joe/home/
Replace calendar-data with
<address-data xmlns="urn:ietf:params:xml:ns:carddav"/>
for CardDAV.
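So the full CardDAV request would look like this (same pattern as above; the /joe/contacts/ path is just a placeholder):
curl -X PROPFIND -u login -H "Content-Type: text/xml" -H "Depth: 1" \
  --data "<propfind xmlns='DAV:'><prop><address-data xmlns='urn:ietf:params:xml:ns:carddav'/></prop></propfind>" \
  https://myserver/joe/contacts/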
This will give you an XML entity which has the iCal/vCard contents embedded. To restore it, you would need to parse the XML and extract the data (not hard).
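A minimal sketch of that extraction, assuming xmllint (from libxml2) is available; the output may still need small cleanups such as decoding XML entities:
curl -s -X PROPFIND -u login -H "Content-Type: text/xml" -H "Depth: 1" \
  --data "<propfind xmlns='DAV:'><prop><calendar-data xmlns='urn:ietf:params:xml:ns:caldav'/></prop></propfind>" \
  https://myserver/joe/home/ \
  | xmllint --xpath "//*[local-name()='calendar-data']/text()" -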
Note: although this is plain standard, some servers reject it or just omit the content (lame! file bug reports ;-).
Specifically for people using Baïkal (>= 0.3.3; other Sabre/dav-based solutions will be similar), you can go directly to
https://<Baïkal location>/html/dav.php/
in a browser and get an HTML interface that lets you download .ics files, and thereby also lets you find the right links to use with curl/wget.
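Once you have spotted the right link in that interface, the download itself can be scripted; a sketch, with the path being whatever the interface showed you:
curl -u username -o calendar-backup.ics 'https://<Baïkal location>/html/dav.php/<path shown in the interface>'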
I tried the accepted answer, which did not work for me. With my CalDAV calendar provider I can, however, retrieve all calendar files using
wget -c -r -l 1 -nc --user='[myuser]' --password='[mypassword]' --accept=ics '[url]'
where [myuser] and [mypassword] are what you expect and [url] is the same URL as the one that you enter in your regular CalDAV software (as specified by your provider).
The command creates a directory containing all the ICS-files representing the calendar items. A similar command works for my addressbook.
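For the addressbook, presumably the same command with --accept=vcf and your CardDAV URL does the trick (untested sketch, per the note above that a similar command works):
wget -c -r -l 1 -nc --user='[myuser]' --password='[mypassword]' --accept=vcf '[carddav url]'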
I'm dealing with a server that can't seem to make curl requests to itself.
To illustrate:
I have a localhost server (named http://experiments.local). When I go to the terminal and do "curl http://experiments.local", that works.
Now I upload all the stuff to the production server (http://www.prod.com). When I ssh to that box and do "curl http://www.prod.com", it just hangs.
Is there any setting that says "no curl to self"? If yes, how do I turn that off?
Just to clarify:
calling "curl http://www.prod.com" from my local machine works too. So it's really only when I try doing curl from that same box.
The reason why I need that is because when a user hit the API living in www.prod.com, that API will call a 3rd party vendor that upon failure / success will hit a callback URL that we pass along to them.
Now since, I added this option to my curl call curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true); the API call just stall. Works fine on my local machine version since my local machine doesn't have that curl hanging issue.
Thank you,
Tee
I had the same problem here; it turns out the solution is quite simple.
Open your /etc/hosts file and you will find these two lines (the 147.4.12.20 IP is just an example; yours will be different):
127.0.0.1 something
147.4.12.20 something
Just add your domain to the lines that point to your server. Note that /etc/hosts matches exact hostnames, so use the name you actually curl (www.prod.com in this case). It will look like this:
127.0.0.1 something www.prod.com
147.4.12.20 something www.prod.com
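A quick way to verify on the box that the new mapping is picked up (assuming the names above; getent is available on typical Linux systems):
getent hosts www.prod.com    # should now resolve locally via /etc/hosts
curl -v http://www.prod.com/ # should connect and respond instead of hanging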