What is the best way to know which protocols are supported by curl in Python? - pycurl

I am building a Python application that is used to download remote files. In most cases we use pycurl to do the actual download, but we need to define a class that wraps the pycurl object. The class can handle several protocols (HTTP(S), FTP(S) and SFTP).
We have noticed that on some distributions (for example Ubuntu 18.04), cURL doesn't support the SFTP protocol, so using some SFTP-related options (SSH_KNOWNHOSTS for instance) leads to crashes (the crash occurs when setting the option, before the download, even if the URL uses another protocol). Therefore we need to know which protocols are available when the class is defined (i.e. when importing the module).
What is the best way to know, in Python, which protocols are supported by cURL? I know that the output of pycurl.version_info() contains the supported protocols (item 8), but is there a better way?

pycurl does not track or check which protocols are supported by libcurl.
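In practice, reading pycurl.version_info() once at import time and caching the result seems to be the pragmatic answer. A minimal sketch (index 8 holding the protocol names is taken from the question itself; the bytes/str normalisation and the Downloader wrapper are illustrative assumptions, not the asker's actual code):

```python
import pycurl

# version_info() returns a tuple; index 8 is the tuple of protocol names
# libcurl was built with, e.g. ('dict', 'file', 'ftp', ..., 'sftp', ...).
def _supported_protocols():
    protocols = pycurl.version_info()[8]
    # Normalise to str in case a given pycurl build returns bytes.
    return {p.decode() if isinstance(p, bytes) else p for p in protocols}

SUPPORTED_PROTOCOLS = _supported_protocols()
SFTP_AVAILABLE = "sftp" in SUPPORTED_PROTOCOLS


class Downloader:
    """Hypothetical wrapper; only sets SSH options when libcurl has SFTP."""

    def __init__(self, known_hosts=None):
        self.handle = pycurl.Curl()
        if known_hosts is not None and SFTP_AVAILABLE:
            # Safe: this libcurl build includes SSH/SFTP support.
            self.handle.setopt(pycurl.SSH_KNOWNHOSTS, known_hosts)
```

This keeps the protocol check out of the download path: the set is computed when the module is imported, which is exactly when the wrapper class is defined.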

Related

Using Apache VFS Library Get File Size (Symbolic Link)

I use the Apache VFS library to access files on a remote server. Some files are symbolic links, and when I get the file size of these files, it comes back as 80 bytes. I need to get the actual file size. Any ideas on how to accomplish this?
Using commons-vfs2 version 2.1.
OS is Linux/Unix.
You did not say which protocol/provider you are using. However, it most likely does not matter: as far as I know, none of them implement symlink chasing (besides the local provider). You only get the size the server reports for the actual directory entry.
VFS is a rather high-level abstraction. If you want to drive a protocol client more specifically, using commons-net or httpclient or whatever library matches your protocol gives you many more options.

Delphi Apache-Module with SSO

We successfully created an Apache module with Embarcadero Delphi (10.3). The next step is to extend this module with SSO functionality (NTLM/Kerberos).
I understand there are several Apache modules that enable SSO features for PHP/HTML content and directories by extending the httpd.conf file (or even for locations like the ones our module uses).
But I have no idea how to access the Apache server variables or the information about the SSO credentials (Windows logon name) from inside my Apache module.
Perhaps someone can give me a hint here.
Possible alternatives:
Recode the Negotiate handshakes (NTLM/Kerberos) inside the module (I already did this for Indy)
Use a little PHP script file to access the variables (with redirect/AJAX for example)
Somehow (I would not know how) add that information to the request headers inside Apache before it reaches the module (sounds insecure)
But I would like to use an easier way ;)
Thanks
For the xxm project (which also has an Apache httpd module!) I've implemented NTLM authentication using the AcquireCredentialsHandle and AcceptSecurityContext calls.
It works using the WWW-Authenticate response header and the corresponding Authorization request header. First there's a bare NTLM value, followed by one or more round-trips of NTLM followed by a space and base64-encoded data that you need to pass to those calls until you get a SEC_E_OK value back.

Linux Kernel Core Implementation

How do I insert a Hop-by-Hop Options extension header into an IPv6 frame in the Linux kernel?
Is implementing it through iptables using the Netfilter framework (i.e. the mangle chain and the OUTPUT hook) the better option, or should I write code to include it as a patch to the Linux kernel?
I have been trying to find the implementation of this option in Linux by traversing the transport- and network-layer code, but couldn't.
[Screenshots attached: IPv6 frame, generated packets]
Kindly suggest a better way of implementing this.
From a quick glance at the code it should be possible to set hop-by-hop options using setsockopt().
I've not tried to work out how to do it exactly, but net/ipv6/ipv6_sockglue.c handles IPV6_HOPOPT in do_ipv6_setsockopt().
You will need to be root (or have CAP_NET_RAW at least) to do so.
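For illustration, here is a minimal user-space sketch of that approach in Python (assumptions: socket.IPV6_HOPOPTS is exposed on your platform, the PadN-only option buffer is just a placeholder payload, and ::1/9999 is a dummy destination; real options must follow the RFC 8200 type/length encoding):

```python
import socket

# Attach a Hop-by-Hop Options extension header to outgoing IPv6 datagrams
# via setsockopt(IPV6_HOPOPTS), the path handled by do_ipv6_setsockopt().
sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)

# The buffer is the extension header itself:
#   byte 0: Next Header (the kernel fills this in; the value here is ignored)
#   byte 1: Hdr Ext Len = 0  -> the header is 8 bytes in total
#   bytes 2-7: a single PadN option (type 1, data length 4, four zero bytes)
hbh = bytes([0x00, 0x00, 0x01, 0x04, 0x00, 0x00, 0x00, 0x00])
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_HOPOPTS, hbh)

# Dummy destination purely for illustration; run as root / with CAP_NET_RAW.
sock.sendto(b"probe", ("::1", 9999))
```

Note that this only affects packets sent from that socket; if you need the header on traffic you do not generate yourself, you are back to Netfilter or a kernel patch.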

When do I need to use tomcat-coyote.jar - Coyote API?

When do I need to load the tomcat-coyote API JAR in the web server, and for what reason?
I'm asking because a third-party product makes use of the Coyote API, I guess for some kind of connector, but I'm not sure what.
It can be any one of a number of things. That JAR does contain the HTTP and AJP connector implementations, but it also has a number of utility classes, such as a package-renamed copy of Apache Commons BCEL used for annotation scanning, some optimized collection implementations, and various HTTP utilities (cookies, file upload, header parsing, parameter parsing, etc.), to name but a few.
The quick way to figure out what it is using is to remove the JAR and look for the ClassNotFoundExceptions.

Nginx upload installation error

I am on Mac OS X Lion using Nginx 1.4.1. I am using nginx in conjunction with Tornado.
In the process of installing the Nginx upload module (v. 2.2.0) I encountered some compatibility issues. See this reference for more info. Apparently, there is no great fix for this as of yet. My specific error is rooted in: error: no member named 'to_write' in 'ngx_http_request_body_t'
Is there a way to make the two of these reliably compatible without jumping through hoops?
Or, is there a suitable alternative to using this upload module that will work with Nginx 1.4.1?
If not, should I consider using Nginx 1.3.8? And if so, where can I download this version? I do not see it available for download on their website here.
Thank you for the help. Regards.
1) No, it doesn't seem like there is, as the maintainer of nginx-file-upload has implied he doesn't want to maintain it any more.
2) I found this article which lists some alternatives, one of which is nginx-big-upload; I've not tried it yet.
3) Well, you could consider it, but then you're tied to a package that isn't maintained. What happens if there's a security vulnerability for 1.3.8? You can't upgrade without either patching or changing your file-upload strategy. If you want to, you can find all of the older Nginx versions here.
The situation is pretty frustrating at the moment, but there are options; just none of them are tried and true. When dealing with production systems, stability and security are key.
1) Yes, this module does not support nginx 1.4+.
2) The reason is that nginx added support for chunked transfer encoding and improved its code design; in doing so it removed the to_write field from the ngx_http_request_body_t struct.
3) https://github.com/hongzhidao/nginx-upload-module. This is an alternative module. It supports the latest nginx, and the features are equivalent.