I've recently started using soapUI 4.5.1 and I'm still not familiar with all the settings. I've set up a few web requests and all but one are working correctly. I'm trying to send content of type application/xml but I'm getting:
org.apache.http.client.ClientProtocolException caused by org.apache.http.ProtocolException: Content-Length header already present.
The same request always worked fine for me in 4.5.0. Content of request is something like this:
POST http://exampleHost.com/exampleRequest HTTP/1.1
Accept-Encoding: gzip,deflate
Accept: text/xml
Content-Type: application/xml
Content-Length: 456
Host: exampleHost.com
Connection: Keep-Alive
User-Agent: Apache-HttpClient/4.1.1 (java 1.5)
Followed by the xml.
I take it that the Content-Length header is being sent twice, but I don't know where, because I haven't set it anywhere. SoapUI shows Additional HTTP Headers for this message as 0.
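From what I can tell, Apache HttpClient (which SoapUI uses, per the User-Agent above) throws exactly this exception when an explicit Content-Length header is added on top of the one it derives from the request entity. A minimal sketch that reproduces the same exception, though I have no idea whether this is what SoapUI does internally:

import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.DefaultHttpClient;

public class DuplicateContentLength {
    public static void main(String[] args) throws Exception {
        DefaultHttpClient client = new DefaultHttpClient();
        HttpPost post = new HttpPost("http://exampleHost.com/exampleRequest");
        post.setHeader("Content-Type", "application/xml");
        // HttpClient computes Content-Length from the entity itself, so adding the
        // header explicitly as well makes it "already present" when the request is sent.
        post.setHeader("Content-Length", "456");
        post.setEntity(new StringEntity("<example/>", "UTF-8"));

        // Throws ClientProtocolException caused by
        // ProtocolException: Content-Length header already present
        HttpResponse response = client.execute(post);
        System.out.println(response.getStatusLine());
    }
}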
Any pointers would be great!
I got the error to go away (in SoapUI 4.5.1) by checking the Authenticate Preemptively flag in
Preferences -> HTTP Settings -> Authenticate Preemptively
I got in touch with SmartBear support; the problem seems to have been fixed in the latest nightly build, available at http://soapui.org/Downloads/soapui-pro-nightly-builds.html.
I encountered the same problem using an authenticating development server that requested user credentials but actually accepted a blank (or any other) password. I was leaving the password blank, which worked fine in soapUI 4.5.0 but failed the way you described in 4.5.1. I found that simply putting some text into the password appears to fix the problem.
Don't know if this relates to your case, but just in case it is useful.
I faced the same issue with SoapUI Pro 4.5.1, and finally figured out that it was down to the proxy settings.
Resolution:
Adding the target server to the Exclude list in the Proxy Settings resolved the issue. This is the case even if the endpoint URL is localhost.
Preferences --> Proxy Settings ---> Exclude
You can specify multiple servers as comma-separated values.
Research / observation on my system:
Strangely, the same test suite runs without an issue on another system in the same office, so it must be something to do with the way the systems are configured. Playing with the proxy settings in Internet Options has no effect.
For my request, proxy authentication was required, but when I enable the proxy settings I get the HTTP client protocol exception with the duplicate Content-Length error. We can see this in the HTTP log once the request is sent, but we don't have an option to configure it.
An interesting observation was that one of the Content-Length headers was in the incoming request and the other one was in the outgoing request. That shouldn't throw off the request, though.
Another way to cause this error is to call the web service with the wrong password (I was given the wrong one, honest) too many times and get your account locked.
As soon as the password was reset and the account unlocked the "org.apache.http.client.ClientProtocolException caused by org.apache.http.ProtocolException: Content-Length header already present" exception went away and the web service call worked as expected.
When I try to test an API with localhost:[port], Postman gives an "invalid character in header ["Host"]" error in the console. I am using a .NET Core Web API. I cross-checked the CORS configuration on the API end and it is fine; the issue is on the Postman side.
Postman version: v8.7.0
I had the same error being reported for any forked or created Postman requests:
Error: Invalid character in header content ["Host"]
The request URL was using a global parameter:
{{BaseUri}}/some/sort/of/resource
In the console logs the following was reported (URLs redacted):
Request Headers
User-Agent: PostmanRuntime/7.29.0
Accept: */*
Cache-Control: no-cache
Postman-Token: 9d14e81d-1e21-44a2-93ed-2758f0ad24fa
Host: my.url.co.uk↵
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Note the ↵ character at the end of the Host Header.
The global BaseUri parameter did not appear to have a line break at the end of it. However, completely deleting said parameter and recreating it seems to have fixed the issue.
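As a rough check (Java-flavoured, and the value below is just an example), any trailing whitespace or control character in the value that feeds the Host header is enough to make it invalid:

public class TrailingHostCheck {
    public static void main(String[] args) {
        // Hypothetical BaseUri value that ends with an invisible line break.
        String baseUri = "https://my.url.co.uk\n";

        // Trailing whitespace/control characters make the derived Host header invalid.
        System.out.println(baseUri.equals(baseUri.trim())); // false -> something is trailing
        System.out.println("[" + baseUri.trim() + "]");     // the value that should be used
    }
}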
I also had the same issue, and I was able to find the error by exporting the "My Workspace" content and opening it in Notepad++, then changing the encoding to ANSI (Encoding => ANSI). The special characters then become visible.
This can happen when you copy the URL, paste it into Postman, and then try to edit it.
If you are getting this URL from someone, you can ask them to provide an exported JSON file from Postman and then import it into your workspace.
I thought the issue was in a variable I was using because the error was telling me there's an invalid character in my host https://localhost:4431 which is exactly the value of my variable.
I figured out the invalid character was actually not in my variable but in the rest of the URL in my request.
Turns out, when copying endpoint names from my API's Swagger, I was also copying an invisible character, %E2%80%8B (a zero-width space). I saw it when checking the RequestPath in the API's console: RequestPath:/%E2%80%8BmyEndpoint
Removing this invisible character solved the issue.
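If you suspect the same thing, here is a small Java-flavoured sketch (the URL is just an example) for spotting and stripping those invisible characters (zero-width spaces, BOM) from a copied URL:

public class StripInvisible {
    public static void main(String[] args) {
        // Example URL copied from Swagger with a zero-width space (U+200B) in the path.
        String url = "https://localhost:4431/\u200BmyEndpoint";

        // Strip zero-width spaces/joiners and the BOM character.
        String cleaned = url.replaceAll("[\\u200B\\u200C\\u200D\\uFEFF]", "");

        System.out.println(url.equals(cleaned)); // false -> an invisible character was present
        System.out.println(cleaned);             // https://localhost:4431/myEndpoint
    }
}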
Taken from the question comments, by Fidel Garcia:
I created the request again via Add Request menu and it works. I'm not sure if it is a problem with the update and old requests. The old one is still failing.
===================================================
It also worked for me: I created a new request with the same parameters and it worked.
I had originally created a request with the parameters in the headers. After that wrong request I changed it to the correct configuration (a POST request with the parameters in the body) and still got the error. After creating a new request with the correct configuration (POST request, parameters in the body) it worked correctly.
In my case:
I removed the authentication from the header, then re-entered the authentication credentials.
In my case, a newline (Enter) after the param and path generated the error. The exact reason can be found in the Postman console.
In my case this happened because I added an extra blank space at the end of an environment variable definition. That extra space was being taken into account in the route when making a request.
Be careful with those extra blank spaces.
This is essentially a duplicate of Can not add account for custom Sonos service, but there's no accepted answer and I am not able to add a comment to ask if they ever resolved their issue.
I've inherited a project and am trying to add the development service. I've configured it via /customsd.htm, set the header policy to Session ID, have both secure and insecure endpoints working.
When I go to add the channel, I see the request for strings.xml. However, I never see any requests come in for getSessionId. Meanwhile the SONOS reports "Account Not Found. ***** server did not recognize your login information." I am able to make the request with SoapUI, and I get a valid response.
If it's worth mentioning, I am in SONOS' beta program and am on version 6.2, build 31926010 (Mac desktop app).
UPDATE:
While I'm not sure there's anything useful here, looking at the logs at [deviceIP]:1400/support/aggregate, I see the following. Note that the redacted URL and IP do resolve; the IP is for a load balancer, and the URL is behind it.
Feb 28 11:07:42 Sonos[84168] <Error>: (SCLib) dns(1): [redacted URL] -> [redacted IP]
Feb 28 11:07:42 Sonos[84168] <Error>: (SCLib) control_client(1): getSessionId failed, res = 1000, tvStart = 1456679262 s 250163 us, m_tvConnectDone = 1456679282 s 250162 us, m_tvDone = 1456679302 s 250162 us, tvNow = 1456679262 s 509982 us
Feb 28 11:07:42 Sonos[84168] <Error>: (SCLib) soap(1): - param username = [redacted username]
UPDATE #2:
I inspected the packets via Wireshark, and the behavior of the production service and the development one seems the same, except that for the production service the controller / my computer kicks off a POST request to the Sonos before the server hangs up. That process, outlined in red in the attached image, does not occur for the customsd service.
I also experimented with using the production service endpoints in the customsd configuration, but that request failed in the same manner. FWIW, all ssl_validation tests pass just fine, as do various content tests.
For want of a trailing slash...
I finally sorted this, and it was incredibly minor. In the customsd configuration, my endpoint urls did not have a trailing slash. Adding them made it all work.
I finally realized this when I decrypted the calls the controller made to the server for both the production service and my customsd service registered to the production endpoints. The call made by the customsd service was getting back the load balancer's 400 error. The only difference between the calls was the request line, specifically:
Production service: POST / HTTP/1.1
Dev service : POST HTTP/1.1
LB properly said the call was invalid. Adding the trailing slash made it all work.
For what it's worth, I'm pretty sure the URLs without trailing slashes worked both via curl and, I believe, via SoapUI. The Sonos controller requires them, however.
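If anyone wants to see why the request line ends up with no path at all, java.net.URI shows it directly (just an illustration, not Sonos code):

import java.net.URI;

public class TrailingSlash {
    public static void main(String[] args) {
        // Without a trailing slash the path component is empty, which is
        // presumably how "POST HTTP/1.1" ends up on the wire.
        System.out.println("[" + URI.create("https://example.com").getPath() + "]");  // []
        // With the trailing slash the path is "/", giving "POST / HTTP/1.1".
        System.out.println("[" + URI.create("https://example.com/").getPath() + "]"); // [/]
    }
}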
I am using WSO2 API Manager 1.3.1 and trying to use it to proxy to a REST service. I have set up the service in the API Manager and can successfully post and get responses, typically JSON, though some are text.
However, when I try to GET a resource that returns binary content (a zip "file", content-type: application/octet-stream), the API Manager does not seem to respond, and I can see an error in the console window (I'm running wso2server.bat in a console):
[2013-07-03 11:52:05,048] WARN - SourceHandler Connection time out while writing the response: 173.21.1.22:1268->173.21.1.22:8280
I have an HTTPModule on my internal service and it seems to be responding with the appropriate content (I can see the GET and response data logged). I can also call the internal service directly and get a response, so that end of things seems OK. But going through the API Manager fails.
I found information on enabling other content-types:
WSO2 API Manager - Publishing API with non-XML response
http://wso2.com/library/articles/binary-relay-efficient-way-pass-both-xml-non-xml-content-through-apache-synapse
Using that information I tried to enable application/octet-stream for messageFormatter and messageBuilder using the binary relay, and it did not help (or seem to make a difference). I have even disabled all other content types and used the binary relay for everything, and it does not help.
Currently, I'm running with just the following in both axis2.xml and axis2_client.xml (in their appropriate sections):
<messageBuilder contentType=".*" class="org.wso2.carbon.relay.BinaryRelayBuilder"/>
<messageFormatter contentType=".*" class="org.wso2.carbon.relay.ExpandingMessageFormatter"/>
I still get my JSON and text responses, but WSO2 times out getting the zip content. I saw the JIRA referenced in axis2.xml about enabling the ".*" relay, but as the other requests seem to work, I'm not sure it's an issue for me. I did try adding format="rest" to the API definition, but it seemed to break all operations, even the ones that worked before, so I've pulled it back out.
Any ideas on what is happening or how to dig in and debug this will help. Thanks!
After working with this for much too long, it turns out that my WSO2 configuration was correct, using the Message Relay and BinaryRelayBuilder, etc. While my REST service could reply immediately, I was setting an HTTP header that I assume WSO2 does not like, because when I removed it WSO2 would reply at the expected rate (instantly).
I was setting the header:
Transfer-Encoding: binary
When I removed that header from my service reply, then WSO2 operated as expected. I don't know if that's a "bug" in WSO2 or if I was implementing incorrectly, but I do have what seems like a "workaround" by omitting that header from my service response.
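To illustrate the shape of the fixed reply, here is a sketch in servlet terms (this is not my actual service code; the class name and file path are made up): return the zip with a normal Content-Type and Content-Length and let the container worry about the transfer encoding.

import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ZipDownloadServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        byte[] zip = Files.readAllBytes(Paths.get("/tmp/example.zip")); // placeholder path
        resp.setContentType("application/octet-stream");
        resp.setContentLength(zip.length);
        // Deliberately no resp.setHeader("Transfer-Encoding", "binary") here;
        // that header is what made WSO2 time out in my case.
        try (OutputStream out = resp.getOutputStream()) {
            out.write(zip);
        }
    }
}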
I am attempting to build a C# module to connect to the Twitter streaming API using OAuth (now the only option). I have got to the point where my module will successfully access API URLs using GET, but everything I do to try to make a POST request fails with a 401.
I have checked that my signature is correct by using the OAuth Tool tab on the page for my Twitter app and fixing the values for nonce and timestamp in my code. I have curl for Windows set up and can verify that it works with the sample curl script generated by the OAuth tool (by the way, this needs some correction of the quotes to make it work for curl in Windows cmd: get rid of the single quotes on values that don't need them, use double quotes on anything that needs to be quoted, and on the Authorization header, use double quotes and escape the double quotes within the header with a backslash).
I have even gone to the length of running curl in trace mode and outputting the bytes I send in the POST body from my C# code, and I can verify that they are the same.
I am trying to access 'https://stream.twitter.com/1.1/statuses/filter.json' using 'track=twitter' as the post body. The headers are:
Accept: */*
User-Agent: curl/7.21.7 (amd64-pc-win32) libcurl/7.21.7 OpenSSL/0.9.8r zlib/1.2.5
Content-Type: application/x-www-form-urlencoded
Host: stream.twitter.com
Content-Length: 13
Connection: Keep-Alive
Authorization: OAuth <the oauth stuff>
I can't inspect the packets being sent to check on the wire that the requests are identical, as they are of course SSL encrypted.
Any ideas?
I eventually got this to work. Things I discovered that might help you if you have this kind of problem:
I had a problem initially because I created a new nonce every time that bit of code was accessed. This meant the nonce used in generating the signature was different from the one in the header. Obvious fail.
I then ran into the problem above. The cause was that I was adding the OAuth header to my request AFTER I wrote the request body; the request is sent as soon as you write to the request stream for a POST (see the sketch at the end of this answer).
It was very useful in finding point 2 that I worked out how to use Fiddler to trace web requests from code. Essentially all you need to do is add this to your web.config:
<system.net>
<defaultProxy>
<proxy proxyaddress="http://127.0.0.1:8888" />
</defaultProxy>
</system.net>
As soon as I tried to read the HTTPS request, Fiddler prompted me to install bits so it could decrypt the request, which I did, and then I could see the exact request going down the wire. I could compare this with what cURL was doing using its -x 127.0.0.1:8888 option.
However, I then ran into a problem with my request timing out, which, bizarrely enough, was caused by the fact that Fiddler was proxying the response. Once I took the above out of my web.config again it all worked. Hallelujah!
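Coming back to point 2: the same ordering trap exists outside C#. With Java's HttpURLConnection, for example, all the headers (including Authorization) have to be set before you write the body, because opening the output stream connects the request and the headers are fixed from that point on. A rough sketch (the URL and body mirror the question, and the Authorization value is a placeholder for a properly signed header):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PostHeaderOrdering {
    public static void main(String[] args) throws Exception {
        byte[] body = "track=twitter".getBytes(StandardCharsets.UTF_8);

        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://stream.twitter.com/1.1/statuses/filter.json").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);

        // Set ALL headers before touching the output stream; setRequestProperty
        // throws IllegalStateException once the connection has been opened.
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setRequestProperty("Authorization", "OAuth <the oauth stuff>"); // placeholder

        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        System.out.println(conn.getResponseCode()); // 401 until the signature is right
    }
}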
I am trying to test a servlet I wrote that processes a PayPal IPN notification (my servlet is very similar to this example). The thing is, even after enabling all the settings in the test account I am using, the IPN notification is not firing at all.
I then found out that apparently in the sandbox the only way to test IPN is through the IPN simulator. I am trying to use it, but I am getting:
IPN delivery failed. HTTP error code 405: Method Not Allowed
Does anyone have the slightest clue?
Also, I am seeking ANY advice for testing IPN handlers in a straightforward manner, since the IPN simulator kind of sucks (whichever option you pick, it resets all the fields, and so forth).
Any help appreciated!
I would check to make sure that your web server is allowing POST requests on your IPN handler URL. In this example, I used the PHP version of the example on the page you linked, and placed the script at /ipn.php.
I then telnet to my server. (replace with your server address)
$ telnet myserver.com 80
Trying myserver.com...
Connected to myserver.com.
Escape character is '^]'.
Paste the following into your telnet session (replace ipn.php and myserver.com), and add a blank line after the last header.
POST /ipn.php HTTP/1.1
Host: myserver.com
Connection: close
Content-Type: application/x-www-form-urlencoded
Content-Length: 0
You should then get a response whose first line is:
HTTP/1.1 200 OK
If you don't see a 200 Status, it means your application is not handling POST requests properly, which is a probable cause of the 405 Error.
You should make sure that you implement a doPost() method in your servlet, as well as a doGet().
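For reference, a minimal sketch of such a servlet (the class name is a placeholder and the real verification logic is omitted):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class IpnServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // PayPal (and the IPN simulator) POST the IPN variables here.
        // Verification against https://www.sandbox.paypal.com/cgi-bin/webscr would go here.
        resp.setStatus(HttpServletResponse.SC_OK);
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // A 200 here just confirms the URL is reachable from a browser;
        // the simulator itself only uses POST.
        resp.setStatus(HttpServletResponse.SC_OK);
    }
}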
If you are able to get the requests working from the IPN simulator, and are ready to move on to sandbox testing, make sure that you have the correct Notify URL and that IPN is enabled under the sandbox seller's profile.
Also, make sure that your IPN handler is logging INVALID requests as well, so that you know whether the request was even initiated.
Finally, make sure that the IPN verification URL is set to https://www.sandbox.paypal.com/cgi-bin/webscr in your servlet. (the URL in the example you posted is https://www.paypal.com/cgi-bin/webscr)