For many months, my OAuth2 routine has been working perfectly. The OAuth2 routine creates new tokens so that I can carry out API operations.
Suddenly, a week ago, I started getting the following error:
Charset [empty string]
ErrorDetail I/O Exception: peer not authenticated
Filecontent Connection Failure
Header [empty string]
Mimetype Unable to determine MIME type of file.
Responseheader struct [empty]
Statuscode Connection Failure. Status code unavailable.
Text YES
I am using ColdFusion 10 and I have not changed anything in my testing environment for several months, so the change must have come from PayPal's end.
I am using the following http call:
<cfset clientid = "***">
<cfset secret = "***">
<cfhttp method="post" url="https://api.sandbox.paypal.com/v1/oauth2/token" result="result">
<cfhttpparam type="header" name="Content_Type" value="application/json">
<cfhttpparam type="formfield" name="grant_type" value="client_credentials">
<cfhttpparam type="header" name="Authorization" value="Basic #ToBase64(clientid & ":" & secret)#">
</cfhttp>
Can anyone help me solve this problem?
Okay, I had exactly the same issue connecting to the new PayPal REST APIs. The reason behind the error is PayPal's migration away from the VeriSign G2 root certificate (which PayPal no longer supports) to the SHA-256 algorithm and VeriSign G5-signed certificates.
The confusing part is that ColdFusion 10 and ColdFusion 11 already ship the cacerts file in the ColdFusion truststore, so why is it still not working?
After hours of trying and searching, I discovered the certificate issue lies in the JRE folder, not in ColdFusion itself. That quickly led me to upgrade ColdFusion to run on the latest Java JDK, 1.8.0_101 (my test server was running on JVM 1.7 and the production server on 1.8.0_25). I upgraded both and the code ran (it was similar to Charles's code in the original post).
So here are the simple steps:
Upgrade ColdFusion to the latest update from ColdFusion Administrator
Install the latest Java JDK (currently 1.8_101) and remember where you install it
Go back into ColdFusion Administrator, go to Java and JVM under Server Settings, point the JVM to the JRE folder inside the new JDK (e.g. the /{JDK_home}/Contents/Home/jre folder), and then restart ColdFusion (you can confirm the new JVM with the quick check below).
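To confirm that ColdFusion actually picked up the new JVM after the restart, a quick check like this can help (a minimal sketch; it just reports the system properties the running instance sees):
<!--- Report the Java version and java.home the running ColdFusion instance is using --->
<cfset sys = createObject("java", "java.lang.System")>
<cfoutput>
    Java version: #sys.getProperty("java.version")#<br>
    Java home: #sys.getProperty("java.home")#
</cfoutput>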
The PayPal OAuth 2.0 call will work again! (At least it did for me.) I hope this helps someone save hours of frustration, and reassures you that the latest PayPal REST API does work with ColdFusion (even if they don't provide an example - I am working on submitting one to PayPal via GitHub shortly).
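For reference, once the call succeeds, the access token can be pulled from the JSON response along these lines (a rough sketch; the access_token field follows PayPal's documented token response, and result is the cfhttp result variable from the call above):
<!--- Parse the OAuth2 token response when the request comes back successfully --->
<cfif result.statuscode contains "200">
    <cfset tokenData = deserializeJSON(result.filecontent)>
    <cfset accessToken = tokenData.access_token>
</cfif>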
Just to let everyone know, once I installed CF11, the PayPal token was issued without a problem. Obviously, the CF11 cacerts and security providers are compatible. Now, I must try and sort out Railo, which could be more difficult, as I am on Railo 4. I will try updating Railo to its most recent 4+ version...
Update:
To get this to work on Railo, you need to do a clean install of Lucee 4.5 (the Railo 4.2 successor). DO NOT UPDATE FROM RAILO TO LUCEE BY MOVING .JARs. I repeat: you need to carry out a clean install of Lucee 4.5:
http://lucee.org/downloads.html
I then came across an issue with the BonCode adapter.
If you get the following error from IIS:
IIS Handler "BonCode-Tomcat-CFM-Handler" has a bad module
Check your IIS application pools. In the application pool, click "Basic Settings" in the panel on the right. If the .NET version is 2.0, change it to 4.x and save the change.
TIP:
Make sure your web.config file has the following setting to view this error:
<configuration>
  <system.webServer>
    <httpErrors errorMode="Detailed"/>
  </system.webServer>
</configuration>
This should save you a week's work:)
Good luck all!
I am trying to understand/reproduce the Log4Shell vulnerability, using this POC and also information from Marshalsec.
To do that, I downloaded Ghidra v10.0.4, which is said (on the Ghidra download page) to be vulnerable to Log4Shell, installed it on an Ubuntu VM along with Java 1.8 (as stated in the POC), and loaded the POC and the Marshalsec snapshot.
When I tried to start Ghidra, it said Java 11 was needed. So although I had installed Java 1.8, I also downloaded Java 11; when you start Ghidra, it says the installed version is not good enough and asks for the path to a Java 11 version, so I just gave it the path to the JDK 11 directory and it seems happy with that. Ghidra starts fine.
I then set up my listener and launched the POC, got the payload string to copy/paste into Ghidra, and got a response in the LDAP listener saying it would send it to HTTP. But nothing more. The end.
Since the HTTP server is set up by the same POC, I thought maybe I just couldn't see the redirection, so I started the HTTP server myself, started the LDAP server myself with Marshalsec, and retried (the steps below show the exact commands/outputs).
Setting up the HTTP server:
Setting up the listener:
Setting up the LDAP server:
Sending the payload string in Ghidra (in the help/search part, as shown in the kozmer POC); I immediately got an answer:
I still receive a response on the LDAP listener (two, in fact, which seems weird), but nothing on the HTTP server. The Exploit class is never loaded in Ghidra (it immediately shows a pop-up saying the search was not found; I think it is supposed to wait for the server's answer before doing that?), and I get nothing back in my listener.
Note that I don't really understand this Marshalsec/LDAP thing, so I'm not sure what's happening here. If anyone has time to explain, that would be nice. I've read a lot of material about the vuln, but it rarely goes deeply into details (most of it is like: the payload string sends a request to the LDAP server, which redirects to the HTTP server, which delivers the Exploit class to the vulnerable app and gives you a shell).
Note: I've checked, the HTTP server is up and accessible, and the Exploit.class file is there and can be downloaded.
Solved it.
It turned out that for Log4Shell to work you need a vulnerable app and a vulnerable version of Java, which I thought I had, but nope. I had Java 11.0.15 and needed the original Java 11 release (Ghidra needs Java 11 at minimum, and the only vulnerable version of Java 11 is the first one).
I downloaded and installed that Java 11 release, and the POC worked perfectly.
I inherited an existing project without its development environment. I have UAT code and a backup of the Production database. I can run up the site locally via Visual Studio but have hit an authentication problem trying to setup a fresh standalone DEV server on AWS (single server, no load balancer). The doco indicates the Prod server is a dual server setup with a load balancer.
The front-end site pages do display, although some search functionality is not working. On trying to log into the backend pages, Chrome returns "The xxx page isn't working. xxx redirected you too many times." Using developer tools, I can see the page redirecting back and forth between SWT?realm=... and sitefinity?wrap_defalted=true&wrap_access_token... On the second redirect, the response header contains "X-Authentication-Error: Missing configuration for the issuer of security tokens 'https://xxx/Sitefinity/Authenticate/SWT'".
I tried different values in the web.config lines:
<federatedAuthentication>
  <wsFederation passiveRedirectEnabled="true" issuer="http://localhost" realm="http://localhost" requireHttps="true"/>
  <cookieHandler requireSsl="false"/>
</federatedAuthentication>
but that actually made things worse so I have reverted.
I checked all the settings mentioned in http://docs.sitefinity.com/administration-switch-to-claims-based-authentication and they seem to be set correctly. I don't really know what else I can check to get this working.
I found http://docs.sitefinity.com/administration-configure-security, but it does not seem like these settings are set (I don't have access to Prod server so can't confirm if it is actually setup with load balancing). I am currently using a 30 day trial license so am not sure if this is contributing to the problem. The official license is in the process of being transferred by the client. The domain name associated with the official license would be different to the domain my new server is currently running on.
I am also running version 8 code on a version 9 install of Sitefinity. I wanted to get it working before I tried to upgrade the code. I think there was also an assembly-load/manifest mismatch when I tried upgrading my local version.
Found the solution: Don't mess with the SecurityConfig.config file.
<securityTokenIssuers>
  <add key="B886AA7BFB5515BA63F577A44BBEB5C7AE674035514D128BC397346B11F4C97A" encoding="Hexadecimal" membershipProvider="Default" realm="http://localhost" />
</securityTokenIssuers>
<relyingParties>
  <add key="B886AA7BFB5515BA63F577A44BBEB5C7AE674035514D128BC397346B11F4C97A" encoding="Hexadecimal" realm="http://localhost" />
</relyingParties>
Even though it is running on a server, the above lines should still point to localhost. It seems like these only need to be edited if you have a multi-server setup with an entirely separate STS.
I initially changed it to match the new domain name, but after some experimentation around adding localhost and HTTP variations, it seems like it works best with just localhost.
Even when I changed the web.config entry above to use the new domain as the issuer instead of localhost, and SecurityConfig.config to specify only the new domain as the realms, it didn't seem to work. I guess the authentication must try to hit localhost specifically.
Because of a code mistype I accidentally removed the "JsafeJCE" security provider from my ColdFusion 9 server. Is there any way to restore it? If so, how?
Server and ColdFusion service have been restarted without a result.
While Googling the problem, I read that in ColdFusion 9 Enterprise I just have to restart the ColdFusion service, but this didn't help. The provider is still gone when I list the providers using:
<cfset local.objSecurity = createObject("java", "java.security.Security") />
<cfdump var="#local.objSecurity.getProviders()#">
It is ColdFusion 9 Standard on a Windows Server 2008 R2. The JRE is 1.6.0_17.
Thank you in advance.
The source of the problem was that I wasn't able to make a cfhttp request over HTTPS. So I tried a possible solution found on the web and mistyped the code. It suggested removing the buggy SSL provider, running the request, and then re-adding the provider to the JVM. Because it seems I permanently removed the JsafeJCE provider from my native CF9 JRE, I had to solve it another way.
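For context, the workaround I was attempting looks roughly like this (a sketch of the idea only; the key point is keeping a reference to the provider so it can be re-registered afterwards, which is the step my typo broke):
<!--- Temporarily remove the JsafeJCE provider, make the request, then put it back --->
<cfset objSecurity = createObject("java", "java.security.Security")>
<cfset jsafeProvider = objSecurity.getProvider("JsafeJCE")>
<cfset objSecurity.removeProvider("JsafeJCE")>
<!--- ... run the failing cfhttp call here ... --->
<cfset objSecurity.insertProviderAt(jsafeProvider, 1)>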
I have now solved it by updating the server to JDK 1.7.0_79. Now cfhttp works fine with HTTPS.
To do that, you have to change the JRE path (the java.home entry) in jvm.config (located in {CF9_installpath}/runtime/bin) to point to the jre directory inside the new JDK 1.7.0_79 install.
The last step is to copy msvcr100.dll from the new JDK's bin directory to your ColdFusion jre directory.
If anyone is interested in a step-by-step tutorial for updating the underlying JRE in ColdFusion 9, please let me know.
I'm running a site and several subdomains on ColdFusion 10 Standard; we have just upgraded from ColdFusion 9, where everything was working fine.
The sites have a wildcard SSL certificate installed so all the subdomains are secured. I'm making HTTP calls between the sites but getting the I/O Exception: peer not authenticated error.
Charset [empty string]
ErrorDetail I/O Exception: peer not authenticated
Filecontent Connection Failure
Header [empty string]
Mimetype Unable to determine MIME type of file.
Responseheader struct [empty]
Statuscode Connection Failure. Status code unavailable.
Text YES
I've installed the certificate in the correct keystore and verified it's there using keytool -list, and restarted, but I still get the authentication error.
I know this is a common problem and is usually fixed by installing the certificate and there are workarounds for ColdFusion Enterprise but I'm struggling to get this working on Standard. Any suggestions?
After many days of investigation I came across this blog post which leads to bug report #3598342.
It turns out to be an issue on Windows Server 2012 running IIS 8. There is an option on the site binding called 'Require Server Name Indication' (SNI). Turning this option off allows cfhttp to connect via HTTPS.
It turns out this is an issue with the HttpClient library and has been fixed in builds 288845, 288846 and 290605, which doesn't really help, as the latest update (update 13) only gives me build 287689.
This could be an issue I have run across where I had to import the secure server's certificate into ColdFusion before it would allow me to connect.
http://helpx.adobe.com/coldfusion/kb/import-certificates-certificate-stores-coldfusion.html
Import Certificate for ColdFusion 10
Hope that helps. I know it helped me!
I have a simple cfhttp request (a login) going out to an SSL server:
<cfhttp url="https://www2.[domain].com/api/user/login" method="POST" port="443" >
<cfhttpparam type="formfield" name="username" value="[username]" >
<cfhttpparam type="formfield" name="password" value="[password]" >
</cfhttp>
The request fails before it begins, and the ColdFusion server says:
I/O Exception: peer not authenticated
Both development environments work smashingly. They receive the login session and then hand that to the collector process which successfully taps the remote web service for data.
After I spent a day trying to get the correct certificate into the ColdFusion stores, I had the bright idea to actually compare them to the working development environments. I looked at them (keytool -list), and they are identical.
Now that the obvious has been ruled out, the questions I'm left with are twofold:
Is there some other certificate repository I need to check? Alternatively, is there a way to get ColdFusion to tell me which certificate repository it expects to find the certificate in (on the off chance it can be, and has been, altered), if that is even possible?
Identify and correct whatever else could be causing this.
Are the development and production environments the same? Are they all, for example, ColdFusion 9 Standard or ColdFusion 8 Enterprise?
In my experience, this error is usually caused by one of two things:
The administrator failed to install the certificate into the cacerts repository, or they installed it into the wrong one.
ColdFusion Enterprise and ColdFusion Developer editions (for both ColdFusion 8 and ColdFusion 9, I believe) have an issue between the built-in BSafe CryptoJ library and certain types of certificates (I have not yet been able to determine a pattern) that causes this error. There are some workarounds if this is the case.
First, I would explore the possibility that you are importing into the wrong certificate repository. It can be hard to tell which repository is being used. In your CF Admin, under "Settings Summary", you should be able to find the location of the JRE that is being used; it is listed under "Java Home". Take that directory and add lib/security to the end of it, and that should be the location of the cacerts file that is being used. I say should because I have seen at least one weird situation where it was not.
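If it's easier, you can also ask the running JVM directly; a quick sketch like this (assuming the default truststore location under java.home) prints the path to check:
<!--- Print the cacerts path the running ColdFusion JVM should be using by default --->
<cfset javaHome = createObject("java", "java.lang.System").getProperty("java.home")>
<cfoutput>#javaHome#/lib/security/cacerts</cfoutput>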
I HAD the same problem. I tried everything and couldn't fix it. The strange thing is that everything worked fine and then suddenly stopped working. It might be a Java update on the server causing the problem, or a change of the certificate on the website the CFHTTP call is trying to access.
Anyway, here is a link I set up for a "demo" of this problem:
http://www.viaromania.eu/https.cfm
As you can see, I am trying to access an HTTPS service using the CFHTTP tag, and it is not working. I deleted the certificate from C:\ColdFusion9\runtime\jre\lib\security\cacerts, generated a new one from the website URL, imported it back, installed "certman" under CFIDE/administrator, checked the certificate, it's there... and it's listed on my test page.
If you scroll to the bottom of my test page, you'll see a similar CFHTTP call to https://www.google.com, and this works fine, even though there is no certificate for it installed on the server.
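That working control case is nothing more than something along these lines (a sketch; the result variable name is mine):
<!--- Control test: a plain HTTPS request that succeeds without any manually imported certificate --->
<cfhttp url="https://www.google.com" method="get" result="googleTest">
<cfdump var="#googleTest.statuscode#">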
It is important to mention that the request works just fine on my development machine, where I also don't have any certificate installed...
AND THIS IS HOW I FIXED IT
1. Updated ColdFusion 9.0.2 with this - https://helpx.adobe.com/coldfusion/kb/cumulative-hotfix-1-coldfusion-902.html
2. Installed Java JDK 1.7.0_79 from here http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
3. Changed the Java Home in ColdFusion Administrator / Server Settings → Java and JVM from "C:\ColdFusion9\runtime\lib\jre" to "C:\Program Files\Java\jdk1.7.0_79\jre"
That's it. I don't know if it uses any certificate or not. The certificates were installed in "C:\ColdFusion9\runtime\lib\jre\lib\security\cacerts" and were not moved from there or anything.