Telegram-cli how to handle requests through php? - telegram-bot

I'm building a bot with telegram-cli.
I installed telegram-cli on the server and everything works well.
To make the bot post automatically, I run: bin/telegram-cli -k tg-server.pub -W -s query.lua
Requests to the bot are handled by the query.lua file.
How can I make the requests the bot receives get sent to a PHP file and processed there?
I looked at https://github.com/zyberspace/php-telegram-cli-client, but I couldn't get it to work.
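For reference, the linked library works by talking to telegram-cli over a TCP socket rather than through the Lua script: you start telegram-cli with -P <port> and send it the same plain-text commands you would type at its prompt. A minimal sketch of that approach from the PHP side (the port number and the peer id are assumptions for illustration):

<?php
// Start telegram-cli separately, e.g.:
//   bin/telegram-cli -k tg-server.pub -W -P 2391
// and then push commands to it over the socket (port 2391 is an assumption).

$sock = stream_socket_client('tcp://127.0.0.1:2391', $errno, $errstr, 5);
if ($sock === false) {
    die("Could not connect to telegram-cli: $errstr\n");
}

// telegram-cli accepts the same plain-text commands as its interactive prompt.
fwrite($sock, "msg user#12345678 Hello from PHP\n"); // peer id is hypothetical

echo fread($sock, 4096); // read telegram-cli's reply
fclose($sock);

Incoming messages still have to be caught on the telegram-cli side; e.g. the query.lua script's on_msg_receive handler can forward each message to a PHP script over HTTP.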

Is there an API for fetching stopped containers list

I need to get the list of all stopped containers via an API, but I have only found CLI commands that do this.
If no such API is available, please suggest how I could build one around the Docker commands, so that whenever I hit the API I get the list of stopped containers.
First, if you need other machines to reach the Docker daemon, you have to enable its TCP socket in /lib/systemd/system/docker.service, like this (then run systemctl daemon-reload and restart Docker):
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375
Second, you can paste the following URL into a browser to list all containers whose status is exited:
http://10.192.244.188:2375/containers/json?filters={"status":["exited"]}
If you use curl, you may need to URL-encode the special characters in the filter, like this:
curl http://10.192.244.188:2375/containers/json?filters=%7B%22status%22%3A%5B%22exited%22%5D%7D
You can also pipe the output through a JSON formatter to make it easier to read:
curl http://10.192.244.188:2375/containers/json?filters=%7B%22status%22%3A%5B%22exited%22%5D%7D | python -m json.tool
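The same call can be made from PHP with the curl extension; a small sketch, using the daemon address from this answer (adjust host and port to your own setup):

<?php
// List exited containers via the Docker Engine API.
$filters = urlencode(json_encode(['status' => ['exited']]));
$url = "http://10.192.244.188:2375/containers/json?filters=$filters";

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body as a string
$response = curl_exec($ch);
curl_close($ch);

// Each entry describes one stopped container.
foreach (json_decode($response, true) as $container) {
    echo $container['Id'], ' ', $container['Status'], PHP_EOL;
}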

cURL API Commands

I have a question about APIs and cURL. I'm not sure if this is Python-related at all, but I am trying to access JSON data using an API, and this server isn't as easy as grabbing the data with an XMLHttpRequest... The support team gave me this line of code:
curl -k -s --data "api_id=xxxx&api_key=xxxx&time_range=today&site_id=xxxxx" https://my.incapsula.com/api/stats/v1
I have no idea what this means, because all the API requests I've made so far were as easy as fetching a URL and parsing the response with some JavaScript. Can anyone break down the -k -s --data options for me, or point me to the right tutorial?
(NOT PYTHON; sorry guys...)
The right tutorial is the man page.
-k/--insecure
(SSL) This option explicitly allows curl to perform "insecure" SSL connections and transfers. All SSL connections are attempted to be made secure by using the CA certificate bundle installed by default. This makes all connections considered "insecure" fail unless -k/--insecure is used.
See this online resource for further details: http://curl.haxx.se/docs/sslcerts.html
-s/--silent
Silent or quiet mode. Don't show progress meter or error messages. Makes Curl mute.
As for --data, it specifies the data you are sending to the server; curl will POST it as the request body.
This question is (for now) not related to Python at all, but rather to shell scripting.
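For reference, here is roughly the same request made from PHP with the curl extension. The placeholders are kept from the question; CURLOPT_SSL_VERIFYPEER => false is the equivalent of -k, so only use it if you really must skip certificate checks:

<?php
// Equivalent of: curl -k -s --data "..." https://my.incapsula.com/api/stats/v1
$ch = curl_init('https://my.incapsula.com/api/stats/v1');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,  // return the body instead of printing it
    CURLOPT_POST           => true,  // --data implies a POST request
    CURLOPT_POSTFIELDS     => 'api_id=xxxx&api_key=xxxx&time_range=today&site_id=xxxxx',
    CURLOPT_SSL_VERIFYPEER => false, // -k / --insecure; avoid in production
]);
$json = curl_exec($ch);
curl_close($ch);

print_r(json_decode($json, true));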

how to pass information from cli or server to fastcgi (php-fpm) and back again?

I'm trying to write a webserver. I didn't want to write a PHP module, so I figured I'd pass information to php-fpm the way nginx and Apache do. I did some research, set up two prototypes, and just can't get it to work.
I've set up a PHP service listening on port 9999 that will print_r($GLOBALS) on each connection, and I've set up nginx to pass PHP requests to 127.0.0.1:9999. The requests ARE being passed, but only argc (1), argv (the path to the PHP service), and the $_SERVER vars are populated. The $_SERVER vars hold a lot of information about the environment the PHP process is running in, but I don't see ANY information about the connected user or their request: no REMOTE_ADDR, no QUERY_STRING, no nothing...
I'm having trouble finding documentation on HOW to pass this information from the CLI or from a prototype server to a FastCGI PHP process. I've found a list of some of the older CGI vars, but no information on HOW to pass them, or whether any of them are outdated with FastCGI.
So, again, I'm asking HOW you pass info from your server prototype or the CLI to a php-fpm or FastCGI process, or WHERE I can find proper, clear, definitive documentation on this subject (and no, the RFC is not the answer). I've been reading fastcgi.com and Wikipedia as well as numerous other search results...
=== update ===
I've managed to get a working FastCGI "service" up and running via a prototype in PHP. It listens on 9999, parses a binary FCGI request from the CLI and even from nginx, formats a binary FCGI response, and sends it back over the network; the CLI displays it fine, and nginx even returns the decoded FCGI response to the browser just like nature intended.
Now, when I try it the other way around and write a prototype server that forms a binary FCGI packet and sends it to php-fpm, I get NOTHING: no error output on the CLI or in the error logs (I can't ever get php-fpm to write to the error logs anyway [-_-]). So WHY wouldn't php-fpm give me SOME kind of response, either as error text, or as a binary network packet, or ANYTHING???
So I can SEND data from the CLI to FastCGI, but I can't get anything back unless it's MY OWN FastCGI process (and no, I didn't take over php-fpm's port; I'm on 9999 and it's on 9000).
=============
TIA \m/(>_<)\m/
Got it.
The server passes information to the FastCGI process in the form of a binary network packet. Much like a DNS packet, you have to use your language's binary string manipulation functions to build the packet, including its header and payload, then send it across the pipe / network to the FastCGI server, which parses the binary packet and sends back a binary response. Fun stuff; it could have been better documented, ahem :-\
Oh, and if you prototype a listener in PHP, you can't access this packet via any PHP vars; you actually have to read from the connection you're listening on (because it's a binary network packet, not plain-text 'post' data).
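To make that concrete, here is a minimal sketch of building those binary records in PHP and sending them to php-fpm on port 9000; the record layout follows the FastCGI spec, while the script path and CGI variables are assumptions:

<?php
// Each FastCGI record starts with an 8-byte header:
// version, type, request id, content length, padding length, reserved.
function fcgiRecord(int $type, int $reqId, string $content): string {
    return pack('CCnnCC', 1, $type, $reqId, strlen($content), 0, 0) . $content;
}

// Name-value pair with 1-byte lengths (fine while both lengths are < 128).
function fcgiPair(string $name, string $value): string {
    return chr(strlen($name)) . chr(strlen($value)) . $name . $value;
}

$sock = stream_socket_client('tcp://127.0.0.1:9000', $errno, $errstr, 5);

$reqId = 1;
// FCGI_BEGIN_REQUEST (type 1): role = FCGI_RESPONDER (1), flags = 0.
$payload = fcgiRecord(1, $reqId, pack('nC', 1, 0) . str_repeat("\0", 5));

// FCGI_PARAMS (type 4): the CGI variables nginx would normally pass.
$params = fcgiPair('GATEWAY_INTERFACE', 'FastCGI/1.0')
        . fcgiPair('REQUEST_METHOD', 'GET')
        . fcgiPair('SCRIPT_FILENAME', '/var/www/html/index.php') // assumed path
        . fcgiPair('QUERY_STRING', 'foo=bar')
        . fcgiPair('REMOTE_ADDR', '127.0.0.1');
$payload .= fcgiRecord(4, $reqId, $params);
$payload .= fcgiRecord(4, $reqId, ''); // empty FCGI_PARAMS record closes the stream
$payload .= fcgiRecord(5, $reqId, ''); // empty FCGI_STDIN record: no request body

fwrite($sock, $payload);

// Dump whatever php-fpm sends back (raw FCGI_STDOUT / FCGI_END_REQUEST records).
while (!feof($sock)) {
    echo fread($sock, 8192);
}
fclose($sock);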

Allowing a PHP script to ssh, using sudo

I need to allow a PHP script on my local web server, to SSH to another machine to perform a specified task on some files. My httpd runs as _www with low permissions, so setting up direct passwordless SSH is difficult, not to say ill-advised.
The way I do it now is to have a minimal PHP script that sudo-exec's (as me) a shell script which is outside of the document root. The shell script in turn calls (as me) the PHP code that does the actual SSH work, and prints its output. Here's the code.
read_remote_files.php (The script I call from my browser):
exec('sudo -u me -n /home/me/run_php.sh /path/to/my_prog.php', $results);
print implode(PHP_EOL, $results); // exec() fills $results with an array of output lines
/home/me/run_php.sh (Runs as me, calls whatever it's given):
#!/bin/sh
php "$1" 2>&1
sudoers:
_www ALL = (me) NOPASSWD: /home/me/run_php.sh
This all works, as my_prog.php is called as me and can SSH as me. It doesn't seem too insecure, since run_php.sh can't be called directly from a browser (it lives outside the document root). The issue I'm having is that my_prog.php isn't invoked as an HTTP program, so it doesn't have access to the HTTP environment variables (DOCUMENT_ROOT etc.).
Two questions:
Am I making this too complicated?
Is there an easy way for my final script to get the HTTP variables?
Thanks!
Andy
Many systems do stuff like this using a (privileged) cron job that frequently checks for the existence of a file, a database record or some other resource, and then performs actions if there are any.
The huge advantage of this is that there is no direct interaction between the PHP script and the privileged script at all. The PHP script leaves the instructions in a resource, and the privileged script fetches them. As long as the instructions can't lead to the system getting compromised or damaged, it's definitely more secure than sudoing.
The disadvantage is that you can't push changes whenever you like; you have to wait until the cron job runs again. But maybe it's an option anyway?
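A minimal sketch of that spool-file pattern (the directory, task names, and the remote command are all assumptions):

<?php
// A sketch of the cron/spool pattern described above.
const SPOOL_DIR = '/var/spool/mytasks'; // assumed; must be writable by _www

// Web side (runs as _www): just leave an instruction file.
function enqueueTask(array $task): void {
    file_put_contents(SPOOL_DIR . '/' . uniqid('task_', true) . '.json',
                      json_encode($task));
}

// Cron side (runs as the privileged user, e.g. every minute).
function processTasks(): void {
    foreach (glob(SPOOL_DIR . '/task_*.json') as $file) {
        $task = json_decode(file_get_contents($file), true);
        // Validate strictly: only a fixed set of actions, so a compromised
        // web script can't make the privileged side run arbitrary commands.
        if (($task['action'] ?? '') === 'sync_files') {
            exec('ssh remotehost /usr/local/bin/do_task.sh', $out); // hypothetical command
        }
        unlink($file); // done (or move it to an archive/failed directory)
    }
}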
"I need to allow a PHP script on my local web server, to SSH to another machine to perform a specified task on some files."
I think you are phrasing this in terms of a solution that you are having difficulty getting to work, rather than as a requirement. Surely what you should be saying is "I want to invoke a task on machine B from a PHP script running under Apache on machine A", and then research solutions to that; there are many, from a simple roll-your-own RPC tunnelled over HTTP(S) to an XML-RPC or SOA framework.
Two caveats:
Do a phpinfo(); on both machines to check which extensions are available, and
also check your php.ini settings to make sure that your service provider hasn't disabled any functions you expect to use (or use a quick-and-dirty script: echo 'disable_functions = ' . ini_get('disable_functions') . "\n"; ...)
If you browse here and the wider internet you'll find many examples. Here is one that I use for a similar purpose.
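As a rough illustration of the roll-your-own RPC idea, here is what the receiving endpoint on machine B could look like; the endpoint name, the shared secret, and the task script are all hypothetical, and machine A would simply POST to it over HTTPS with curl, much like the earlier examples in this document:

<?php
// run_task.php on machine B (hypothetical endpoint).

// Reject requests that don't carry the shared secret (a stand-in here;
// prefer real authentication such as client certificates in practice).
if (!hash_equals('shared-secret-here', $_POST['secret'] ?? '')) {
    http_response_code(403);
    exit('forbidden');
}

// Only allow a fixed whitelist of tasks, never a raw command string.
$allowed = ['process_files' => '/usr/local/bin/process_files.sh']; // hypothetical
$task = $_POST['task'] ?? '';
if (!isset($allowed[$task])) {
    http_response_code(400);
    exit('unknown task');
}

exec($allowed[$task] . ' 2>&1', $output, $status);
header('Content-Type: text/plain');
echo "exit: $status\n" . implode("\n", $output);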

CURL on self not working

I'm dealing with a server that can't seem to curl itself.
To illustrate:
I have a local server (named http://experiments.local). When I go to a terminal and run "curl http://experiments.local", that works.
Now I upload all the stuff to the production server (http://www.prod.com). When I SSH to that box and run "curl http://www.prod.com", it just hangs.
Is there some setting that prevents curl requests to the same host? If so, how do I turn it off?
Just to clarify:
Calling "curl http://www.prod.com" from my local machine works too, so it's really only a problem when I curl from that same box.
The reason I need this is that when a user hits the API living on www.prod.com, that API calls a third-party vendor which, on failure / success, hits a callback URL that we pass along to them.
Since I added the option curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true); to my curl call, the API call just stalls. It works fine in the local version, since my local machine doesn't have this curl hanging issue.
Thank you,
Tee
I had the same problem here; it turns out the solution is quite simple.
Open your /etc/hosts file and you will find these two lines (the 147.4.12.20 IP is just an example; yours will be different):
127.0.0.1 something
147.4.12.20 something
Just add your domain to the lines that point to your server, so they look like this:
127.0.0.1 something www.prod.com
147.4.12.20 something www.prod.com
This makes the box resolve its own domain locally instead of going out through the firewall / NAT, which is usually what causes the hang.
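If you'd rather not touch /etc/hosts, PHP's curl extension can pin the hostname to an address for a single request with CURLOPT_RESOLVE (needs a reasonably recent libcurl). A small sketch; the request path is hypothetical:

<?php
// Pin www.prod.com to the loopback address for this one request only,
// instead of editing /etc/hosts.
$ch = curl_init('http://www.prod.com/api/callback'); // path is hypothetical
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_RESOLVE        => ['www.prod.com:80:127.0.0.1'],
]);
$response = curl_exec($ch);
curl_close($ch);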