The following code (running in ASP.NET 2.0) displays the contents of the requested URL twice. I only want it to display the contents once, and I can't figure out what I'm doing wrong. The requested URL returns XML, and if I visit the URL directly it works fine.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
byte[] postDataBytes = Encoding.UTF8.GetBytes(postData);
request.Method = "POST";
request.ContentType = "application/xml";
request.ContentLength = postDataBytes.Length;
// write the POST body
Stream requestStream = request.GetRequestStream();
requestStream.Write(postDataBytes, 0, postDataBytes.Length);
requestStream.Close();
// get the response and write it to the page
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
StreamReader responseReader = new StreamReader(response.GetResponseStream(), Encoding.UTF8);
try
{
    Response.Write(responseReader.ReadToEnd());
}
finally
{
    responseReader.Close();
}
response.Close();
Your code looks good, so I don't think the problem is there... but what I would suggest is the following:
1) Maybe the error is on the URL's other end... so try hitting Google and see if the returned content is good or not.
2) Put a breakpoint at the "responseReader.ReadToEnd()" spot, and see if what's coming out of there is good.
3) If this code above is in an ASPX page... are you making sure to call "Response.End();" after your last line of code? (Not "response.Close()", but "Response.End()" — see the sketch just below.)
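For illustration, a minimal sketch of what that would look like in a code-behind (proxiedXml and the page members here are hypothetical, not taken from the question):

// Hypothetical ASPX code-behind sketch: write the proxied content, then end the response
// so ASP.NET does not also render the rest of the page after it.
protected void Page_Load(object sender, EventArgs e)
{
    Response.Clear();
    Response.ContentType = "application/xml";
    Response.Write(proxiedXml);   // proxiedXml = the string read from the remote URL
    Response.End();               // stops further page processing (internally throws ThreadAbortException)
}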
I found the problem. It's not with the above code at all, but with the page being called. The page I was calling inherited from a class whose Page_OnInit method contained the line "MyBase.OnLoad(e)", which caused the Page_OnLoad method to be executed twice. Obviously, it should have been "MyBase.OnInit(e)" instead. I didn't catch it because, when I tested the page directly, I had temporarily removed the inheritance from the class because of some other code that would have prevented me from testing the page directly.
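In C# terms (the original was VB-style "MyBase" code), the mistake looked roughly like this:

protected override void OnInit(EventArgs e)
{
    // BUG: invoking the base Load phase from OnInit makes OnLoad run here
    // and again when the page lifecycle reaches the Load event.
    base.OnLoad(e);   // should have been base.OnInit(e);
}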
I will now put on my "Dunce" hat and retreat to the corner for a time out. Thanks anyway for the help.
I am trying to add an event from my local database to a DAViCal server (in fact, this should apply to any CalDAV server, as long as it is compliant with the CalDAV protocol)...
From what I could read here, I can send a PUT request to add events contained in a VCALENDAR collection... So here is what I try to do:
try {
    // Create the HttpWebRequest object
    HttpWebRequest Request = (HttpWebRequest)WebRequest.Create("http://my_caldav_srv/davical.php/user/mycalendar");
    // Add the network credentials to the request
    Request.Credentials = new NetworkCredential(usr, pwd);
    // Specify the method
    Request.Method = "PUT";
    // some headers - I MAY BE MISSING THINGS HERE???
    Request.Headers.Add("Overwrite", "T");
    // Set the content type header (before the request stream is written).
    Request.ContentType = contentType.Trim();
    // Set the body of the request: encode once and use the byte length, not the string length.
    byte[] bodyBytes = Encoding.UTF8.GetBytes(body);
    Request.ContentLength = bodyBytes.Length;
    Stream reqStream = Request.GetRequestStream();
    reqStream.Write(bodyBytes, 0, bodyBytes.Length);
    reqStream.Close();
    // Send the request and get the response from the server.
    HttpWebResponse Response = (HttpWebResponse)Request.GetResponse();
}
catch (Exception e) {
    throw new Exception("Caught error: " + e.Message, e);
}
The body I send is actually an empty calendar:
BEGIN:VCALENDAR
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
PRODID:-//davical.org//NONSGML AWL Calendar//EN
X-WR-CALNAME:My Calendar
END:VCALENDAR
For a reason I cannot understand, the call with "PUT" returns a (405) Method Not Allowed error. A "POST" returns (500) Internal Server Error, but looking at the debug details, the underlying reason is the same as in the PUT case...
In debugging on the server side, I found out that the reason is that in caldav-PUT-vcalendar.php, the following clause is violated:
$c->readonly_webdav_collections
Well, first, let me mention that with the SAME credentials entered in Lightning, I am able to add/remove events, and on the admin interface I actually made sure to grant ALL rights to the user. So I'd be surprised if it were due to that...
Any help would be most appreciated !
Kind regards,
Nik
OK, I got it....
The reason is that one must PUT the event to an EVENT address...
I.e. the "url" is not the collection's address, but the EVENT's address...
So the same code using the following address works:
string url="http://my_server/caldav.php/username/calendarpath/_my_event_id.ics";
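For reference, a minimal sketch of the working request against the event address (the URL, credentials and the text/calendar content type are assumptions based on the above, not the exact original code):

// Sketch: PUT a single VCALENDAR body to the event's own URL, not the collection URL.
string url = "http://my_server/caldav.php/username/calendarpath/_my_event_id.ics";
var request = (HttpWebRequest)WebRequest.Create(url);
request.Credentials = new NetworkCredential(usr, pwd);
request.Method = "PUT";
request.ContentType = "text/calendar; charset=utf-8";   // iCalendar payload
byte[] bodyBytes = Encoding.UTF8.GetBytes(body);        // body = the VCALENDAR string
request.ContentLength = bodyBytes.Length;
using (Stream reqStream = request.GetRequestStream())
{
    reqStream.Write(bodyBytes, 0, bodyBytes.Length);
}
using (var response = (HttpWebResponse)request.GetResponse())
{
    Console.WriteLine(response.StatusCode);   // expect 201 Created or 204 No Content
}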
Does anybody know if it is possible to insert / delete multiple events at once ???
I have a self-hosted WCF REST service bound to a webHttpBinding endpoint. It serves a few streams of different content types. The content itself is delivered correctly, but any OutgoingResponse.ContentType setting seems to be ignored; the response is delivered as "application/xml" every time.
Browsers seem to get over it for JavaScript and HTML (depending on how it's to be consumed), but not for CSS files, which are interpreted more strictly. CSS files are how I noticed the problem, but it affects all Streams. Chromebug and the IE developer tools both show "application/xml" regardless of what I put in the serving code for a content type. I've also tried setting the content type as a Header in OutgoingResponse, but that makes no difference; it's probably just a long way of doing what OutgoingResponse.ContentType does already.
[OperationBehavior]
System.IO.Stream IContentChannel.Code_js()
{
WebOperationContext.Current.OutgoingResponse.ContentType = "text/javascript;charset=utf-8";
var ms = new System.IO.MemoryStream();
using (var sw = new System.IO.StreamWriter(ms, Encoding.UTF8, 512, true))
{
sw.Write(Resources.code_js);
sw.Flush();
}
ms.Position = 0;
return ms;
}
This behavior is added:
var whb = new WebHttpBehavior
{
DefaultBodyStyle = System.ServiceModel.Web.WebMessageBodyStyle.WrappedRequest,
DefaultOutgoingRequestFormat = System.ServiceModel.Web.WebMessageFormat.Json,
DefaultOutgoingResponseFormat = System.ServiceModel.Web.WebMessageFormat.Json,
HelpEnabled = false
};
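For context, the hookup looks roughly like this in a self-hosted setup (the service class, base address and endpoint path below are placeholders, not taken from the original code):

// Hypothetical self-hosting sketch: attach the WebHttpBehavior above to a webHttpBinding endpoint.
var host = new ServiceHost(typeof(ContentChannelService), new Uri("http://localhost:8080/"));
ServiceEndpoint endpoint = host.AddServiceEndpoint(typeof(IContentChannel), new WebHttpBinding(), "content");
endpoint.Behaviors.Add(whb);
host.Open();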
I've tried setting AutomaticFormatSelectionEnabled to both true and false, just in case, because it came up in Google searches on this issue, but that has no effect.
I'm finding enough articles that show Stream and ContentType working together to confuse the heck out of me as to why this isn't working. I believe that the Stream is only intended to be the body of the response, not the entire envelope.
My .svclog doesn't show anything interesting/relevant that I recognize.
============
I can confirm in Fiddler2 that the headers are being delivered as shown in the browser.
...
Content-Type: application/xml; charset=utf-8
Server: Microsoft-HTTPAPI/2.0
...
Solved!
I had something like the following in a MessageInspector:
HttpResponseMessageProperty responseProperty = new HttpResponseMessageProperty();
responseProperty.Headers.Add("Access-Control-Allow-Origin", "*");
reply.Properties["httpResponse"] = responseProperty;
and this was overwriting the already-present HttpResponseMessageProperty in reply.Properties, including any ContentType settings. Instead, I now try to get the existing HttpResponseMessageProperty first and reuse it if found.
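Roughly, the corrected inspector code looks like this (a sketch assuming the standard BeforeSendReply signature, where reply is the outgoing Message):

// Reuse the HttpResponseMessageProperty WCF already attached, so ContentType settings survive.
HttpResponseMessageProperty responseProperty;
object existing;
if (reply.Properties.TryGetValue(HttpResponseMessageProperty.Name, out existing))
{
    responseProperty = (HttpResponseMessageProperty)existing;
}
else
{
    responseProperty = new HttpResponseMessageProperty();
    reply.Properties[HttpResponseMessageProperty.Name] = responseProperty;
}
responseProperty.Headers.Add("Access-Control-Allow-Origin", "*");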
I lucked out seeing that one.
I am new to Web Crawling, and I am using HttpWebRequest to crawl data from sites.
So far I have successfully been able to crawl and get data from my WordPress site. This data was simple user profile data (like name, email, AIM id, etc.).
Now, as an exercise, I want to crawl Wikipedia: search using the value entered into a textbox at my end, and get the appropriate title(s) from the search results.
Now I have the following doubts/difficulties.
Firstly, is this even possible? I have heard that Wikipedia has robots.txt set up to block this, though I have only heard this from a friend, so I'm not sure.
I am using the same procedure I used earlier, but I am not getting the required results.
Thanks !
Update:
After some explanation and help from @svick, I tried the code below, but I am still not able to get any value (see the last line of the code; I am expecting the HTML markup of the search result page there):
string searchUrl = "http://en.wikipedia.org/w/index.php?search=Wikipedia&title=Special%3ASearch";
var postData = new StringBuilder();
postData.Append("search=" + model.Query);
postData.Append("&");
postData.Append("title" + "Special:Search");
byte[] data2 = Crawler.GetEncodedData(postData.ToString());
var webRequest = (HttpWebRequest)WebRequest.Create(searchUrl);
webRequest.Method = "POST";
webRequest.UserAgent = "Crawling HW (http://yassershaikh.com/contact-me/)";
webRequest.AllowAutoRedirect = false;
ServicePointManager.Expect100Continue = false;
Stream requestStream = webRequest.GetRequestStream();
requestStream.Write(data2, 0, data2.Length);
requestStream.Close();
var responseCsv = (HttpWebResponse)webRequest.GetResponse();
Stream response = responseCsv.GetResponseStream();
// Todo Parsing
var streamReader = new StreamReader(response);
string val = streamReader.ReadToEnd();
// val is empty !! <-- this is my problem !
and here is my GetEncodedData method definition:
public static byte[] GetEncodedData(string postData)
{
var encoding = new ASCIIEncoding();
byte[] data = encoding.GetBytes(postData);
return data;
}
Please help me with this.
You probably don't need to use HttpWebRequest. Using WebClient (or HttpClient if you're on .Net 4.5) will be much easier for you.
robots.txt doesn't actually block anything; it's purely advisory. If a client doesn't honor it (and .NET doesn't honor it on its own), it can access anything.
Wikipedia does block requests that don't have their User-Agent header set. And you should use an informative User-Agent string with your contact information.
A better way to access Wikipedia is to use its API, rather than scraping. That way, you will get an answer that's specifically meant to be read by custom applications, formatted as XML or JSON. There are also dumps containing all the information from Wikipedia available for download.
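As a rough illustration, a search against the MediaWiki API with WebClient could look something like this (the query term "Wikipedia" is just a stand-in for your search value):

using System;
using System.Net;

class WikiSearchExample
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Wikipedia rejects requests without an informative User-Agent.
            client.Headers[HttpRequestHeader.UserAgent] = "Crawling HW (http://yassershaikh.com/contact-me/)";
            string url = "https://en.wikipedia.org/w/api.php?action=query&list=search"
                       + "&srsearch=" + Uri.EscapeDataString("Wikipedia")
                       + "&format=json";
            // Returns JSON listing the matching titles, ready to parse.
            Console.WriteLine(client.DownloadString(url));
        }
    }
}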
EDIT: The problem with your newly posted code is that your query returns a 302 Moved Temporarily response pointing at the matched article, if one exists. Either remove the line that sets AllowAutoRedirect to false, or add &fulltext=Search to your query, which means you won't get redirected.
To get around Twitter's streaming API not having a crossdomain policy file that would let me access it from the client side (in this case Silverlight), I have made a generic handler (.ashx) in a web project which basically downloads the stream from Twitter and, as it reads it, writes it to the client.
Here is the handler code:
context.Response.Buffer = false;
context.Response.ContentType = "text/plain";
WebRequest request = WebRequest.Create("http://stream.twitter.com/1/statuses/filter.json?locations=-180,-90,180,90");
request.Credentials = new NetworkCredential("username", "password");
StreamReader responseStream = new StreamReader(request.GetResponse().GetResponseStream(), Encoding.GetEncoding("utf-8"));
while (!responseStream.EndOfStream)
{
    string line = "(~!-/" + responseStream.ReadLine() + "~!-/)";
    context.Response.BinaryWrite(Encoding.UTF8.GetBytes(line));
}
And this does work, but the problem is that once the client disconnects the handler just carries on downloading. So how do I tell whether the client is still busy receiving the response and, if not, end the while loop?
Also, my second problem is that on the client side doing a "ReadLine()" does nothing, presumably because it is counting the entire stream as one line, so it never gets the full response. To work around that I read it byte by byte, and when it sees "(~!-/" around something it knows that is one line. VERY hacky, I know.
Thanks!
Found the answer!
while (context.Response.IsClientConnected)
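In context, the loop would look roughly like this (a sketch combining the original EndOfStream check with the client-connected check):

// Stop streaming as soon as the browser/Silverlight client goes away.
while (context.Response.IsClientConnected && !responseStream.EndOfStream)
{
    string line = "(~!-/" + responseStream.ReadLine() + "~!-/)";
    context.Response.BinaryWrite(Encoding.UTF8.GetBytes(line));
    context.Response.Flush();   // push the write out so a disconnect is actually detected
}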
:)
Our C#/.NET software connects to an online app to deal with accounts and a shop. It does this using HttpWebRequest and HttpWebResponse.
An example of this interaction, and one area where the exception in the title has come from is:
var request = HttpWebRequest.Create(onlineApp + string.Format("isvalid.ashx?username={0}&password={1}", HttpUtility.UrlEncode(username), HttpUtility.UrlEncode(password))) as HttpWebRequest;
request.Method = "GET";
using (var response = request.GetResponse() as HttpWebResponse)
using (var ms = new MemoryStream())
{
var responseStream = response.GetResponseStream();
byte[] buffer = new byte[4096];
int read;
do
{
read = responseStream.Read(buffer, 0, buffer.Length);
ms.Write(buffer, 0, read);
} while (read > 0);
ms.Position = 0;
return Convert.ToBoolean(Encoding.ASCII.GetString(ms.ToArray()));
}
The online app will respond with either 'true' or 'false'. In all our testing it gets one of these values, but for a couple of customers (out of hundreds) we get this exception: System.FormatException: String was not recognized as a valid Boolean, which sounds like the response is being garbled by something. If we ask them to go to the online app in their web browser, they see the correct response. The clients are usually on school networks, which can be fairly restrictive and are often behind proxy servers, but most cope fine once they've put the proxy details in or added a firewall exception. Is there something that could be messing up the response from the server, or is something wrong with our code?
Indeed, it's possible that the return result is somehow different.
Is there any particular reason you are using that reasonably elaborate method of reading the response there? Why not:
string data;
using(HttpWebResponse response = request.GetResponse() as HttpWebResponse){
StreamReader str = new StreamReader(response.GetResponseStream());
data = str.ReadToEnd();
str.Close();
}
string cleanResult = data.Trim().ToLower();
// log this
return Convert.ToBoolean(cleanResult);
First thing to note is I would definitely use something like:
bool myBool = false;
Boolean.TryParse(Encoding.ASCII.GetString(ms.ToArray()), out myBool);
return myBool;
It's not some localisation issue, is it? For example, it's expecting the Swahili version of 'true' and getting confused. Are all the sites in one country, with the same language, etc.?
I'd add logging, as suggested by others, and see what results you're seeing.
I'd also lean towards changing the code as silky suggested, though with a few further changes from me (code 'smell' issues, IMO): use a using block around the StreamReader as well as the response.
Also, I don't think the use of as is appropriate in this instance. If the response can't be cast to HttpWebResponse (which, admittedly, is unlikely, but still), you'll get a NullReferenceException on the response.GetResponseStream() call, which is both a vague error and loses the original line number. Using (HttpWebResponse)request.GetResponse() will give you a more accurate error and the correct line number of the actual failure.
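Putting those suggestions together, a minimal sketch of what the calling code could look like (variable names are illustrative):

// using blocks around both the response and the reader, a direct cast instead of 'as',
// and TryParse instead of Convert.ToBoolean.
bool isValid = false;
using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream(), Encoding.ASCII))
{
    string raw = reader.ReadToEnd();
    // log 'raw' here so the garbled responses from those customers can be inspected
    if (!Boolean.TryParse(raw.Trim(), out isValid))
    {
        // fall through with false, or handle the unexpected payload explicitly
    }
}
return isValid;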