libcurl returns CURLE_URL_MALFORMAT with # in file name

When I try to transfer a file using FTPS and the destination file name contains the character #, the transfer fails with this error:
CURLE_URL_MALFORMAT (3): URL using bad/illegal format or missing URL
How do I tell libcurl not to reject this character in the destination file name?

You must URL-encode any '#' symbol as %23 when it is supposed to be part of the path, because an unencoded '#' is taken to mark where the URL "fragment" starts.
This is dictated by RFC 3986.
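As an illustrative sketch (the file name and URL here are hypothetical, not from the question), Python's standard library can do the percent-encoding before the URL is handed to libcurl:

```python
from urllib.parse import quote

# Percent-encode reserved characters in the file-name part of the URL;
# '#' becomes %23, so it is no longer read as the start of a fragment.
filename = "report#3.txt"  # hypothetical destination file name
encoded = quote(filename, safe="")
url = "ftps://example.com/upload/" + encoded
print(url)  # ftps://example.com/upload/report%233.txt
```

The same idea applies in C with libcurl's own `curl_easy_escape()`, which returns a percent-encoded copy of a string.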


Openshift configure "Request Header" authentication

I want to configure OpenShift authentication through Request Header. I tried modifying the master-config.yaml file as described in the Request Header documentation, but it gives certificate errors, so I need help on how to bypass the errors or how to obtain certificates supported by OpenShift. I updated only the following stanza.
identityProviders:
- challenge: true
  login: true
  mappingMethod: claim
  name: my_request_header_provider
  provider:
    apiVersion: v1
    kind: RequestHeaderIdentityProvider
    challengeURL: https://host:port/api/user/oauth/authorize?${query}
    loginURL: https://host:port/api/user/oauth/authorize?${query}
    headers:
    - x-auth-token
I used the following command to restart OpenShift:
openshift start master --config=/etc/origin/master/reqheadauthconfig/master-config.yaml
I am getting the following errors:
Warning: oauthConfig.identityProvider[0].provider.clientCA: Invalid value: "": if no clientCA is set, no request verification is done, and any request directly against the OAuth server can impersonate any identity from this provider, master start will continue.
Invalid MasterConfig /etc/origin/master/reqheadauthconfig/master-config.yaml
etcdClientInfo.urls: Required value
kubeletClientInfo.port: Required value
kubernetesMasterConfig.proxyClientInfo.certFile: Invalid value: "/etc/origin/master/reqheadauthconfig/master.proxy-client.crt": could not read file: stat /etc/origin/master/reqheadauthconfig/master.proxy-client.crt: no such file or directory
kubernetesMasterConfig.proxyClientInfo.keyFile: Invalid value: "/etc/origin/master/reqheadauthconfig/master.proxy-client.key": could not read file: stat /etc/origin/master/reqheadauthconfig/master.proxy-client.key: no such file or directory
masterClients.openShiftLoopbackKubeConfig: Invalid value: "/etc/origin/master/reqheadauthconfig/openshift-master.kubeconfig": could not read file: stat /etc/origin/master/reqheadauthconfig/openshift-master.kubeconfig: no such file or directory
oauthConfig.masterCA: Invalid value: "/etc/origin/master/reqheadauthconfig/ca.crt": could not read file: stat /etc/origin/master/reqheadauthconfig/ca.crt: no such file or directory
serviceAccountConfig.privateKeyFile: Invalid value: "/etc/origin/master/reqheadauthconfig/serviceaccounts.private.key": could not read file: stat /etc/origin/master/reqheadauthconfig/serviceaccounts.private.key: no such file or directory
serviceAccountConfig.publicKeyFiles[0]: Invalid value: "/etc/origin/master/reqheadauthconfig/serviceaccounts.public.key": could not read file: stat /etc/origin/master/reqheadauthconfig/serviceaccounts.public.key: no such file or directory
serviceAccountConfig.masterCA: Invalid value: "/etc/origin/master/reqheadauthconfig/ca-bundle.crt": could not read file: stat /etc/origin/master/reqheadauthconfig/ca-bundle.crt: no such file or directory
servingInfo.certFile: Invalid value: "/etc/origin/master/reqheadauthconfig/master.server.crt": could not read file: stat /etc/origin/master/reqheadauthconfig/master.server.crt: no such file or directory
servingInfo.keyFile: Invalid value: "/etc/origin/master/reqheadauthconfig/master.server.key": could not read file: stat /etc/origin/master/reqheadauthconfig/master.server.key: no such file or directory
servingInfo.clientCA: Invalid value: "/etc/origin/master/reqheadauthconfig/ca.crt": could not read file: stat /etc/origin/master/reqheadauthconfig/ca.crt: no such file or directory
controllerConfig.serviceServingCert.signer.certFile: Invalid value: "/etc/origin/master/reqheadauthconfig/service-signer.crt": could not read file: stat /etc/origin/master/reqheadauthconfig/service-signer.crt: no such file or directory
controllerConfig.serviceServingCert.signer.keyFile: Invalid value: "/etc/origin/master/reqheadauthconfig/service-signer.key": could not read file: stat /etc/origin/master/reqheadauthconfig/service-signer.key: no such file or directory
aggregatorConfig.proxyClientInfo.certFile: Invalid value: "/etc/origin/master/reqheadauthconfig/aggregator-front-proxy.crt": could not read file: stat /etc/origin/master/reqheadauthconfig/aggregator-front-proxy.crt: no such file or directory
aggregatorConfig.proxyClientInfo.keyFile: Invalid value: "/etc/origin/master/reqheadauthconfig/aggregator-front-proxy.key": could not read file: stat /etc/origin/master/reqheadauthconfig/aggregator-front-proxy.key: no such file or directory
Two things to share with you here.
For the provider.clientCA error: a clientCA is required for the RequestHeader identity provider, since the OpenShift API needs it to verify the clients that pass requests with the "x-auth-token" HTTP header.
For all the files with a "no such file or directory" error: I think you just made a copy of /etc/origin/master/master-config.yaml, but all the file paths in it are relative, so they now resolve against your new directory and that is where the errors come from.
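As a sketch of the direction this points in (the clientCA path below is hypothetical; use the CA bundle that actually signed your proxy's client certificate), the provider stanza would set clientCA explicitly:

```yaml
identityProviders:
- challenge: true
  login: true
  mappingMethod: claim
  name: my_request_header_provider
  provider:
    apiVersion: v1
    kind: RequestHeaderIdentityProvider
    challengeURL: https://host:port/api/user/oauth/authorize?${query}
    loginURL: https://host:port/api/user/oauth/authorize?${query}
    # Hypothetical path: CA bundle used to verify the authenticating proxy
    clientCA: /etc/origin/master/reqheadauthconfig/request-header-ca.crt
    headers:
    - x-auth-token
```

For the remaining errors, either make the other file paths in the copied master-config.yaml absolute, or keep the copy in the same directory as the files it references.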

UTF-8 in Webmin

I am trying to run scripts via Webmin but I am getting UTF-8 issues.
The error is as follows:
stdout encoding 'ascii' detected. googler requires utf-8 to work properly. The wrong encoding may be due to a non-UTF-8 locale or an improper PYTHONIOENCODING. (For the record, your locale language is <unknown> and locale encoding is <unknown>; your PYTHONIOENCODING is not set.)
Please set a UTF-8 locale (e.g., en_US.UTF-8) or set PYTHONIOENCODING to utf-8.
I converted the iso8859 language file to en.utf-8 in the Webmin lang folder, but Webmin's Language Configuration page does not show the UTF-8 file.
Is there any way I can run utf-8 in webmin?
Thank you.
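The error message itself names two fixes: a UTF-8 locale or PYTHONIOENCODING=utf-8 in the environment Webmin uses to launch the script. A small check, assuming Python 3, shows PYTHONIOENCODING taking effect regardless of the parent's locale:

```python
import os
import subprocess
import sys

# Launch a child Python with PYTHONIOENCODING and a UTF-8 locale set,
# the same fix googler asks for; the child reports its stdout encoding.
env = dict(os.environ, LANG="en_US.UTF-8", PYTHONIOENCODING="utf-8")
child = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.stdout.encoding)"],
    env=env, capture_output=True, text=True,
)
print(child.stdout.strip())  # utf-8 when PYTHONIOENCODING is honored
```

Setting these variables in the command Webmin runs (or in the account's environment) should satisfy googler's check even when Webmin starts the script with an ASCII locale.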

linux curlftpfs, password with '#'

I want to mount an FTP drive, but my FTP password contains "#".
I enter the command:
curlftpfs myaccount:my#password#thefptserver.com mnt/my_ftp
But it gives me "Error connecting to ftp: Could not resolve host: password".
How do I fix this? No combination of wrapping things in quotes works either; they are completely ignored.
curlftpfs myaccount:my%23password@thefptserver.com:/ /mnt/my_ftp ...
where %23 is the percent-encoding of the character # (and note that the separator before the host must be @, not #).
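A small sketch of the encoding step (the credentials and host here are stand-ins): percent-encode the user and password before splicing them into the URL, so '#' cannot be mistaken for the start of a fragment.

```python
from urllib.parse import quote

# Hypothetical credentials: the password contains '#', which must be
# percent-encoded (%23) inside the userinfo part of the URL.
user = "myaccount"
password = "my#password"
host = "thefptserver.com"
url = f"ftp://{quote(user, safe='')}:{quote(password, safe='')}@{host}/"
print(url)  # ftp://myaccount:my%23password@thefptserver.com/
```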

How to configure custom access-log format for Glassfish-4.1 (worked in v3.1.2.2, ignored in v4.1)

I am converting some in-house systems from Glassfish-3.1.2.2 (on Java 1.7) to Glassfish-4.1 (on Java 1.8). We need a custom access log format to capture some data not present in the default log format specifier.
Glassfish-4.1 seems to be ignoring the format specifier (and for that matter, all other custom settings) in the "access-log" element in the domain.xml file. These config options worked flawlessly in Glassfish-3.1.2.2.
Specifically, consider the following from a working Glassfish-3.1.2.2 system, in the "..../configs/domain.xml" file. Certain values have been redacted, but the actual text is not relevant.
<configs>
  <config name="server-config">
    <http-service access-logging-enabled="true">
      <access-log buffer-size-bytes="128000" write-interval-seconds="1" format="'%client.name% %datetime% %request% %status% %response.length% %session.com.redacted.redacted.User% %header.user-agent% %header.X-REDACTED%'"></access-log>
    </http-service>
  </config>
</configs>
This works great in Glassfish-3.1.2.2. However, in Glassfish-4.1 the settings (format and write-interval-seconds) seem to be ignored (I'm not sure how to test buffer-size-bytes).
I can see the custom access-log format string in the Glassfish-4.1 admin console: (https://host:4848/) -> Configurations -> server-config -> HTTP Service -> Access Logging -> Format.
I performed a few experiments (all failed).
I placed the format string into the "default-config" as well. This is contrary to the documentation (for GF-3, the "default-config" is used as a template to create new configs for new domains, and is NOT used by any running domain). As expected, this edit had no effect on the actual access log file (post service restart).
I edited the log format string from the admin web interface. I appended the static string "ABC123TEST", saved the config and restarted the server. Sure enough, the literal text "ABC123TEST" appears in the correct location in domain.xml, but it is totally ignored when the access logfile is written out.
Example of incorrect access log file (some data edited for secrecy):
"1.2.3.4" "NULL-AUTH-USER" "09/Jun/2015:10:59:10 -0600" "GET /logoff-action.do HTTP/1.1" 200 0
Correct/desired access log sample:
"1.2.3.4" "09/Jun/2015:11:00:01 -0600" "GET /logoff-action.do;jsessionid=0000000000000000000000000000 HTTP/1.1" 200 0 "REDACTED-USER-NAME" "AwesomeUserAgentStr/1.0" "REDACTED-X-HEADER-VALUE"

Sendmail authentication failed [# in password]

When sendmail is configured with a password that starts with the character #, authentication fails. Sendmail threw the error "AUTH=client, available mechanisms do not fulfill requirements".
Is this a known issue?
Is that a restriction of sendmail, SSL authentication, or the rules parsing?
Sample default-auth-info file:
sendmailtest#gmail.com
sendmailtest#gmail.com
#12345678
smtp.gmail.com:587
LOGIN PLAIN DIGEST-MD5 CRAM-MD5 NTLM
Linux platform
Sendmail version: 8.14.0
SASL version: 2.1.22
Thanks in advance for the help.
It seems that the readauth function in the sendmail/usersmtp.c file ignores lines starting with #.
BTW, have you considered using FEATURE(authinfo) instead of confDEF_AUTH_INFO/DefaultAuthInfo?
In any case, the makemap command also treats # as a comment indicator by default, but that can be changed using its -D command-line option.
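An illustrative sketch (not sendmail's actual C code) of that comment-stripping behavior shows why a password line beginning with # silently disappears from the parsed auth info:

```python
def read_auth_info(text):
    # Mimic readauth-style parsing: skip blank lines and lines that
    # start with '#', which are treated as comments.
    return [line for line in text.splitlines()
            if line and not line.startswith("#")]

# Hypothetical DefaultAuthInfo contents; the password line is "#12345678".
sample = "user@example.com\nuser@example.com\n#12345678\nsmtp.example.com:587\n"
print(read_auth_info(sample))  # the password line is gone
```

Since the # is consumed as a comment marker before authentication ever runs, sendmail is left without a password, which matches the "available mechanisms do not fulfill requirements" failure.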