WebDAV LOCK request returns "405 Method Not Allowed" in ASP.NET Core 6 hosted in IIS - asp.net-core

I am migrating an old ASP.NET Framework 4 web app to ASP.NET Core 6.
This web app serves .docx and .xlsx files through IIS and WebDAV so that end users can edit the files directly on the server.
On the old app, the configuration was:
a virtual directory in IIS to map a virtual path Webdav to a physical path
an IHttpModule that let me intercept every request on the server and add authentication when the request targeted /webdav (WebDAV does not support anonymous authentication):
HttpContext.Request.Headers.Remove("Authorization");
HttpContext.Request.Headers.Add("Authorization", "Basic " + base64string);
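For context, the module boiled down to something like the following minimal sketch (the class name and credentials are placeholders, not the original code):
using System;
using System.Text;
using System.Web;

public class WebdavAuthModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += (sender, e) =>
        {
            var context = ((HttpApplication)sender).Context;
            // Only inject credentials for requests under the WebDAV path.
            if (context.Request.Path.StartsWith("/Webdav", StringComparison.OrdinalIgnoreCase))
            {
                // Placeholder service-account credentials; modifying request
                // headers like this requires the IIS integrated pipeline.
                var base64string = Convert.ToBase64String(Encoding.UTF8.GetBytes("user:password"));
                context.Request.Headers.Remove("Authorization");
                context.Request.Headers.Add("Authorization", "Basic " + base64string);
            }
        };
    }

    public void Dispose() { }
}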
On the new app:
the virtual directory is managed directly in code (in Startup):
var lOptions = new FileServerOptions
{
    FileProvider = new PhysicalFileProvider(Sys.Web.AppliIs.Path_Webdav),
    RequestPath = new PathString("/" + Sys.Web.AppliIs.WEBDAV_FOLDER),
    EnableDirectoryBrowsing = false,
};
app.UseFileServer(lOptions);
I intercept the request and add my authentication in a custom middleware (registered before the code above):
app.Use(async (context, next) =>
{
    // Just to log everything that arrives.
    AppliGeckosSL.WriteLogDebug($"intercept {context.Request.Path}, Method : {context.Request.Method}", null);
    var lFileInfo = lHostEnv.ContentRootFileProvider.GetFileInfo(context.Request.Path);
    // Note: GetFileInfo never returns null; Exists is the meaningful check.
    if (lFileInfo.Exists)
    {
        WebdavFileManager.HandleRequest(context, lFileInfo.PhysicalPath);
    }
    // Call the next delegate/middleware in the pipeline.
    await next(context);
});
When I test the new code (opening a file stored on the server with Word), it fails.
When I inspect the logs, I notice two things.
In the IIS logs, I see the various requests issued by Word:
2022-03-23 11:20:31 ::1 OPTIONS /Webdav/ - 7520 - ::1 Microsoft+Office+Protocol+Discovery 200 0 0 47
2022-03-23 11:20:31 ::1 HEAD /Webdav/BeWise.docx - 7520 - ::1 Microsoft+Office+Existence+Discovery 200 0 0 4
2022-03-23 11:20:31 ::1 OPTIONS /Webdav/ - 7520 - ::1 Microsoft+Office+Existence+Discovery 200 0 0 5
2022-03-23 11:20:31 ::1 LOCK /Webdav/BeWise.docx - 7520 - ::1 Microsoft+Office+Core+Storage+Infrastructure/1.0 405 0 0 4
2022-03-23 11:20:31 ::1 GET /Webdav/BeWise.docx - 7520 - ::1 Microsoft+Office+Core+Storage+Infrastructure/1.0 404 0 3 2
We can see the LOCK method finishes with a 405.
We can also see the GET finishing with a 404, which I can't understand because the HEAD on the same file finished with a 200.
The log of my middleware gives me only this:
23/03 12:20:32:023 [FW] intercept /Webdav/BeWise.docx, Method : HEAD
23/03 12:20:32:055 [FW] intercept /Webdav/, Method : OPTIONS
So we see the LOCK and the GET are never handled by my server.
I have seen many answers to this problem on this forum and others that recommend disabling WebDAV, a solution which doesn't fit my case because I want to use WebDAV.
There is not a lot of documentation about .NET Core and WebDAV; I'm not even sure it's supported.
I tried removing the virtual directory from code and setting up a virtual directory through IIS like the old app, but it still doesn't work; in this case the LOCK finishes with a 401 instead of a 405. I noticed my middleware is not called, so I can't add my authentication. I suppose with this option the requests don't go through the ASP.NET Core pipeline.
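For reference, the knobs usually pointed at for a 405 at the IIS layer are request filtering and the ASP.NET Core handler's verb list in web.config. A sketch of those settings follows (values illustrative; per the update below, this did not turn out to be the underlying problem):
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <verbs>
          <!-- Explicitly allow the WebDAV verbs Word sends. -->
          <add verb="LOCK" allowed="true" />
          <add verb="UNLOCK" allowed="true" />
          <add verb="PROPFIND" allowed="true" />
        </verbs>
      </requestFiltering>
    </security>
    <handlers>
      <!-- Make sure the ASP.NET Core handler accepts every verb. -->
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
    </handlers>
  </system.webServer>
</configuration>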
What do you think? Any suggestions?
Thanks for your help!

I got an answer from Microsoft: the IIS WebDAV module is no longer supported with .NET Core (they will update the docs, because this was not clearly stated). So there is no way I can achieve what I want.
My options now:
implement the WebDAV protocol myself
buy a license for the IT Hit WebDAV Server Engine
Thanks anyway for your answers; the subject is closed.

Related

Cannot debug in Visual Studio after changing the port number?

I added the line .UseUrls("http://*:5000") to enable clients on other hosts to access the web API.
public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseIISIntegration()
        .UseStartup<Startup>()
        .UseUrls("http://*:5000") // Added
        .Build();
    host.Run();
}
However, using a browser to access localhost:5000/api/Test results in HTTP/1.1 400 Bad Request. Should the .UseUrls() call only be included in production builds?
HTTP/1.1 400 Bad Request
Date: Mon, 08 Aug 2016 21:42:30 GMT
Content-Length: 0
Server: Kestrel
The following messages are copied from the Visual Studio Output window during testing.
Microsoft.AspNetCore.Hosting.Internal.WebHost:Information: Request starting HTTP/1.1 GET http://localhost:5000/api/Test
Microsoft.AspNetCore.Server.IISIntegration.IISMiddleware:Error: 'MS-ASPNETCORE-TOKEN' does not match the expected pairing token '9bca37f2-7eda-4517-9f8f-60b6cc05cf01', request rejected.
Microsoft.AspNetCore.Hosting.Internal.WebHost:Information: Request finished in 8.5976ms 400
You should call .UseUrls() and/or .UseConfiguration() first, and then .UseIISIntegration().
When running correctly under IIS/IIS Express, you end up with two processes: IIS listening on the desired port and Kestrel on another one. Your requests go to IIS and are forwarded to Kestrel (with the MS-ASPNETCORE-TOKEN header).
The call to .UseIISIntegration() hides this mapping: it changes the port your app listens on and puts IIS on the desired port. But it breaks if you call the two methods in the wrong order.
You are getting this error message because Kestrel expected to run behind IIS but received a direct request, and it noticed because IIS was not there to inject the MS-ASPNETCORE-TOKEN header.
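Applied to the code from the question, the corrected ordering would be (a sketch; only the position of .UseUrls() changes):
public static void Main(string[] args)
{
    var host = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(Directory.GetCurrentDirectory())
        .UseUrls("http://*:5000") // set the URLs first...
        .UseIISIntegration()      // ...then let IIS integration override them when behind IIS
        .UseStartup<Startup>()
        .Build();
    host.Run();
}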
This issue documents the problem, and it may be fixed in a future release.
Another way to solve it:
Since the error comes from combining UseKestrel() and UseIISIntegration(), you can try debugging without IIS/IIS Express by choosing the Kestrel server instead; that avoids the error.
You can check Properties\launchSettings.json to find the other debug options.
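For example, a typical launchSettings.json contains one profile per debug option; choosing the profile whose "commandName" is "Project" runs the app directly on Kestrel (a representative sketch; profile names vary by project):
{
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress"
    },
    "MyWebApi (Kestrel)": {
      "commandName": "Project"
    }
  }
}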

HaProxy Transparent Proxy To AWS S3 Static Website Page

I am using HAProxy to balance a cluster of servers. I am attempting to add a maintenance page to the HAProxy configuration. I believe I can do this by defining a server declaration in the backend with the 'backup' modifier. My question is: how can I use a maintenance page hosted remotely on an AWS S3 bucket (static website) without actually redirecting the user to that page (i.e., without the HAProxy 'redir' server definition)?
If I have servers a, b, and c, and all of them go down for maintenance, then I want all requests to be resolved by server definition d (labeled with 'backup'), pointing to a static address on S3. Note that I don't want paths to carry over and be evaluated on S3; it should always render the static maintenance page.
This is definitely possible.
First, declare a backup server, which will only be used if the non-backup servers are down.
server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup
The following configuration entries are used to modify the request or the response only if we're using the alternate path. We're using two tests in the following examples:
# { nbsrv le 1 } -- if the number of servers in this backend is <= 1
# (and)
# { srv_is_up(s3-fallback) } -- if the server named "s3-fallback" is up; "server name" is the arbitrary name we gave the server in the config file
# (which would mean it's the "1" server that is up for this backend)
So, now that we have a backup back-end, we need a couple of other directives.
Force the path to / regardless of the request path.
http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If you're using an essentially empty bucket with an error document, then this isn't really needed, since any request path would generate the same error.
Next, we need to set the Host: header in the outgoing request to match the name of the bucket. This isn't technically needed if the bucket is named the same as the Host: header that's already present in the request we received from the browser, but probably still a good idea. If the bucket name is different, it needs to go here.
http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If the bucket name is not a valid DNS name, then you should include the entire web site endpoint here. For a bucket called "example" --
http-request set-header host example.s3-website-us-east-1.amazonaws.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If your clients are sending you their cookies, there's no need to relay these to S3. If the clients are on HTTPS and the S3 connection is HTTP, you definitely want to strip these.
http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
Now, handling the response...
You probably don't want browsers to cache the responses from this alternate back-end.
http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
You also probably don't want to return "200 OK" for these responses, since technically, you are displaying an error page, and you don't want search engines to try to index this stuff. Here, I've chosen "503 Service Unavailable" but any valid response code would work... 500 or 502, for example.
http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }
And, there you have it -- using an S3 bucket website endpoint as a backup backend, behaving no differently than any other backend. No browser redirect.
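Putting the pieces together, the complete backend section might look like this (backend and server names, addresses, and the bucket endpoint are illustrative):
backend www
    server a 10.0.0.1:80 check
    server b 10.0.0.2:80 check
    server c 10.0.0.3:80 check
    server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup
    http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }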
You could also configure the request to S3 to use HTTPS, but since you're just fetching static content, that seems unnecessary. If the browser is connecting to the proxy with HTTPS, that section of the connection will still be secure, although you do need to scrub anything sensitive from the browser's request, since it will be forwarded to S3 unencrypted (see "cookie," above).
This solution is tested on HAProxy 1.6.4.
Note that by default, the DNS lookup for the S3 endpoint will only be done when HAProxy is restarted. If that IP address changes, HAProxy will not see the change, without additional configuration -- which is outside the scope of this question, but see the resolvers section of the configuration manual.
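If you do need HAProxy to re-resolve the bucket endpoint at runtime, a sketch of that configuration (resolver name and nameserver address are illustrative) would be:
resolvers mydns
    nameserver dns1 10.0.0.2:53
    hold valid 10s

# then reference the resolvers section on the server line:
server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup resolvers mydns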
I do use S3 as a back-end server behind HAProxy in several different systems, and I find this to be an excellent solution to a number of different issues.
However, there is a simpler way to have a custom error page for use when all the backends are down, if that's what you want.
errorfile 503 /etc/haproxy/errors/503.http
This directive is usually found in global configuration, but it's also valid in a backend -- so this raw file will be automatically returned by the proxy for any request that tries to use this back-end, if all of the servers in this back-end are unhealthy.
The file is a raw HTTP response. It's essentially just written out to the client as it exists on the disk, with zero processing, so you have to include the desired response headers, including Connection: close. Each line of the headers and the line after the headers must end with \r\n to be a valid HTTP response. You can also just copy one of the others, and modify it as needed.
These files are limited by the size of a response buffer, which I believe is tune.bufsize, which defaults to 16,384 bytes... so it's only really good for small files.
HTTP/1.0 503 Service Unavailable\r\n
Cache-Control: no-cache\r\n
Connection: close\r\n
Content-Type: text/plain\r\n
\r\n
This site is offline.
Finally, note that despite your wanting to "transparently proxy a request," I don't think "transparent proxy" is the correct phrase for what you're trying to do. A "transparent proxy" implies that the client or the server (or both) would see each other's IP addresses on the connection and think they were communicating directly, with no proxy in between, because of some skullduggery done by the proxy and/or network infrastructure to conceal the proxy's existence in the path. That is not what you're looking for.

Failure to login to an Orbeon Forms application when using HTTPS

I have an Orbeon Forms v3.7.1 based application that fails when I try to log in using HTTPS instead of HTTP.
I am using Apache as a proxy server connecting requests to an application running on WebLogic 12.1.3. For various reasons, we recently had to migrate the Apache plugin module from mod_proxy to mod_weblogic.
For both modules, we have configured the module to map the incoming HTTPS requests to HTTP going to the WebLogic server. The main difference we noticed that seems to be causing the error is that the Referer header changed. When using the mod_proxy module, the Referer is listed as the Apache server using HTTP. When using the mod_weblogic module, the Referer is listed as the incoming Apache URL using HTTPS.
When using the mod_weblogic module, we get the following error in the Orbeon log when we try to login to the application. It seems to be failing while parsing the URL. Any ideas how to fix this problem?
Here is the exception listed in the orbeon log file:
2015-03-19 22:28:40,340 ERROR ProcessorService - Exception at line 20, column 46 of https://baseqa20151.delphi-tech.com:443/wl1213-test/baseqa20151/oasis2Portal/owsPortal/phs/get-navigation
org.orbeon.oxf.common.ValidationException: line 20, column 46 of https://baseqa20151.delphi-tech.com:443/wl1213-test/baseqa20151/oasis2Portal/owsPortal/phs/get-navigation: Fatal error: The entity name must immediately follow the '&' in the entity reference.
https://baseqa20151.delphi-tech.com:443/wl1213-test/baseqa20151/oasis2Portal/owsPortal/phs/get-navigation, line 20, column 46: Fatal error: The entity name must immediately follow the '&' in the entity reference.
at org.orbeon.oxf.xml.XMLUtils$ErrorHandler.fatalError(XMLUtils.java:306)
at orbeon.apache.xerces.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:178)
at orbeon.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:351)
at orbeon.apache.xerces.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:281)
at orbeon.apache.xerces.impl.XMLScanner.reportFatalError(XMLScanner.java:1459)
at orbeon.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanEntityReference(XMLDocumentFragmentScannerImpl.java:1252)
at orbeon.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(XMLDocumentFragmentScannerImpl.java:1717)
at orbeon.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:324)
at orbeon.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:845)
at orbeon.apache.xerces.parsers.XML11Configuration.parse(XML11Configuration.java:768)
at orbeon.apache.xerces.parsers.XMLParser.parse(XMLParser.java:108)
at orbeon.apache.xerces.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1201)
at org.orbeon.oxf.xml.XMLUtils.inputSourceToSAX(XMLUtils.java:350)
at org.orbeon.oxf.xml.XMLUtils.inputStreamToSAX(XMLUtils.java:335)
at org.orbeon.oxf.processor.URIProcessorOutputImpl.readURLToStateIfNeeded(URIProcessorOutputImpl.java:394)
at org.orbeon.oxf.xforms.processor.XFormsURIResolver.resolve(XFormsURIResolver.java:86)
at org.orbeon.oxf.xforms.processor.XFormsURIResolver.readURLAsDocument(XFormsURIResolver.java:117)
at org.orbeon.oxf.xforms.XFormsModel.performDefaultAction(XFormsModel.java:660)
at org.orbeon.oxf.xforms.XFormsContainingDocument.dispatchEvent(XFormsContainingDocument.java:1283)
at org.orbeon.oxf.xforms.XFormsContainer.initializeModels(XFormsContainer.java:173)
at org.orbeon.oxf.xforms.XFormsContainingDocument.initialize(XFormsContainingDocument.java:1525)
at org.orbeon.oxf.xforms.XFormsContainingDocument.<init>(XFormsContainingDocument.java:181)
at org.orbeon.oxf.xforms.processor.XFormsToXHTML.createCacheContainingDocument(XFormsToXHTML.java:326)
at org.orbeon.oxf.xforms.processor.XFormsToXHTML.access$200(XFormsToXHTML.java:50)
at org.orbeon.oxf.xforms.processor.XFormsToXHTML$2.read(XFormsToXHTML.java:152)
at org.orbeon.oxf.processor.ProcessorImpl.readCacheInputAsObject(ProcessorImpl.java:453)
at org.orbeon.oxf.xforms.processor.XFormsToXHTML.doIt(XFormsToXHTML.java:121)
at org.orbeon.oxf.xforms.processor.XFormsToXHTML.access$000(XFormsToXHTML.java:50)
at org.orbeon.oxf.xforms.processor.XFormsToXHTML$1.readImpl(XFormsToXHTML.java:80)
at org.orbeon.oxf.processor.ProcessorImpl$6.read(ProcessorImpl.java:995)
at org.orbeon.oxf.processor.ProcessorImpl$ProcessorOutputImpl.read(ProcessorImpl.java:1178)
at org.orbeon.oxf.processor.ProcessorImpl.readInputAsSAX(ProcessorImpl.java:350)
at org.orbeon.oxf.processor.ProcessorImpl.readInputAsSAX(ProcessorImpl.java:355)
at org.orbeon.oxf.processor.xinclude.XIncludeProcessor.access$100(XIncludeProcessor.java:41)
...
and here is the log record from the HTTP access.log file for this request:
10.192.16.82 - baseqa20151x [19/Mar/2015:22:28:40 -0400] "GET /wl1213-test/baseqa20151/oasis2Portal/owsPortal/phs/billing-account-policy-inquiry-admin HTTP/1.1" 500 215530
Thanks for your help.

HttpWebRequest 401 response

I have a program written in VB.NET that interacts with a data service hosted on IIS. Authentication is handled through the user's Active Directory credentials. At one of my customer sites, on exactly one (out of about 100) of the customer's workstations, requests to the data service fail with a status of 401.
Some additional relevant information: the production IIS installation is split into two nodes, and a load balancer directs traffic to the nodes. Also, the exact same request made with Internet Explorer from the workstation in question does not fail.
I suspect that something is stripping the user's credentials out of the requests when I make the request through the VB code, but I am stumped as to what that could be.
Here is the VB code that I use to make the request:
Dim httpRequest As HttpWebRequest = Nothing
Dim httpResponse As HttpWebResponse = Nothing
httpRequest = WebRequest.Create("http://server/xyzportal/portal.php")
httpRequest.KeepAlive = False
httpRequest.UseDefaultCredentials = True
httpRequest.Method = "GET"
httpRequest.ContentLength = 0
httpRequest.Accept = "text/xml"
httpRequest.Timeout = 3000000
httpResponse = httpRequest.GetResponse
Any thoughts would be appreciated.
Additional information: here are the IIS log entries for a request that fails. Notice the 2nd entry does not include the Windows user name:
2014-11-11 22:20:42 199.99.51.58 GET /xyzportal/portal.php - 80 - 199.99.50.128 - 401 2 5 0
2014-11-11 22:20:42 199.99.51.58 GET /xyzportal/portal.php - 80 - 199.99.50.128 - 401 1 2148074248 0
Contrast that to the IIS entries for a request from a working machine. Notice the 2nd entry does include the Windows user name:
2014-11-11 22:56:40 199.99.51.58 GET /xyzportal/portal.php - 80 - 199.99.50.128 - 401 2 5 0
2014-11-11 22:56:40 199.99.51.58 GET /xyzportal/portal.php - 80 MYDOMAIN\jreichert 199.99.50.128 - 200 0 0 93
The machine with the IP Address 199.99.50.128 is the load balancer.
I am logged in on the exact same domain and user on both machines.
You haven't said whether you are using a proxy, but if you are, you haven't told the HttpWebRequest to use the AD user credentials for the proxy, so you are getting a 401 Unauthorized error, i.e., you are being refused access by the proxy. If so, try this to tell it explicitly:
HttpRequest.Proxy.Credentials = System.Net.CredentialCache.DefaultCredentials
I had exactly the same problem and this solved it.
KeepAlive must be set to True. Setting KeepAlive = True fixed my problem. The following page explains the role of keep-alive in the authentication handshake:
http://www.innovation.ch/personal/ronald/ntlm.html
I am still not sure why the request does not work on <1% of the workstations in my customer base when KeepAlive = False. All I know is that setting KeepAlive = True makes the request work on 100% of the workstations.
More info: KeepAlive must be set to True when the authentication protocol is Kerberos. The request works with KeepAlive = False if the authentication protocol is NTLM. I don't know why Kerberos is used on only the two workstations where the request does not work.
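For reference, here is the request from the question with the fix applied (a sketch; the URL is the placeholder from the question):
' KeepAlive = True lets the multi-leg Kerberos/NTLM handshake
' complete on a single connection.
Dim httpRequest As HttpWebRequest = CType(WebRequest.Create("http://server/xyzportal/portal.php"), HttpWebRequest)
httpRequest.KeepAlive = True
httpRequest.UseDefaultCredentials = True
httpRequest.Method = "GET"
httpRequest.Accept = "text/xml"
Dim httpResponse As HttpWebResponse = CType(httpRequest.GetResponse(), HttpWebResponse)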

Magento SOAP error - Premature end of data in tag definitions line 2

My client is using Unleashedsoftware.com to connect to a Magento store, but it gives this error:
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <SOAP-ENV:Fault>
      <faultcode>WSDL</faultcode>
      <faultstring>
        SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://www.domain.com/index.php/api/v2_soap/index/wsdl/1/' : Premature end of data in tag definitions line 2
      </faultstring>
    </SOAP-ENV:Fault>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
When browsing http://www.domain.com/index.php/api/v2_soap/index/, Firebug gives me a “500 Internal Server Error”.
When I browse http://www.domain.com/index.php/api/v2_soap/index/wsdl/1/, I am getting valid XML data.
I checked the server log files and found entries like:
[Thu Aug 30 22:22:25 2012] [warn] [client 92.92.92.92] mod_fcgid: stderr: in /home/doaminuser/public_html/lib/Zend/Soap/Server.php on line 762
I have been searching for a couple of days now, and today I tried duplicating the entire site to another test server, where it seems to work! So this appears to be a server issue.
Does anybody have an idea what the issue could be?
Is there a better way of debugging this issue, any sample code or debugging tips?
Magento version is 1.6.2
Thank you.
There are many cases where Magento's SOAP API fails due to problems your Magento server has communicating with itself.
That is, PHP's SOAP implementation requires that the SOAP server itself fetch the WSDL file via HTTP, and a local network configuration issue can get in the way of Magento fetching its own WSDL.
You can debug this by SSHing into your Magento server, and running the following command
curl -l 'http://www.example.com/index.php/api/v2_soap/index/wsdl/1/' > /tmp/wsdl.xml
and then examining the wsdl.xml file. Because you're performing this from your web server, you may get different results than when performing it from your local browser.
I had a similar problem when calling the URL
http://www.store.com/index.php/api/v2_soap/?wsdl
After some time I received the message 500 - Internal Server Error, and a "Premature end of script headers" message in the Apache error log.
After a whole day of research I figured out that the Timeout directive of Apache (configured in httpd.conf on a Linux environment) was set to "20", which caused the server to send the 500 error after 20 seconds. The problem is that in my case the Magento system needs longer than that to "crawl" through all the wsdl.xml files in order to build the WSDL output (if you are using Magento SOAPv2).
Maybe you should check your Timeout directive. Hope that helps.
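For example, in httpd.conf (the value is illustrative; pick something comfortably longer than your WSDL generation time):
# Allow slow WSDL generation to finish before Apache gives up.
Timeout 300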
"I have memories of this. What worked for me was to put the hostname
in /etc/hosts on the server plus the www alias on 127.0.0.1 However,
in this instance the server was in the building rather than in some
ISP place and the LAN had Windows computers on it. Windows users had
downloaded lots of trojan-virus-porn things that were spending the
whole time spamming the network so the real problem was with the
Windows computers on the network, not with the server or with Magento.
After fdisking the PC's the problem was solved."
Thank you! I had been struggling with this for two days on Magento 1.6 and Windows Server 2008. Adding this line to the hosts file (C:\Windows\System32\drivers\etc) solved the issue for me:
127.0.0.1 www.Domain.com
Also remember to fix your Magento SOAP role, because the role resources don't save in 1.6 unless you fix this file:
MagentoRoot\app\code\core\Mage\Adminhtml\Block\Api\Tab\Rolesedit.php
replace this:
if (array_key_exists(strtolower($item->getResource_id()), $resources) && $item->getPermission() == 'allow') {
with this:
if (array_key_exists(strtolower($item->getResource_id()), $resources) && $item->getApiPermission() == 'allow') {
In my case the issue was that the mod_security rule "PHP Easter Egg Access" was enabled.
Rule ID: 380800
Once it was disabled, the API access worked.
An indicator was in the Apache log file:
Jun 19 09:15:52 httpd[1024961]: [error] [client xyz.xyz.xyz.xyz] ModSecurity: [file "/usr/local/apache/conf/modsec/99_asl_jitp.conf"] [line "116"] [id "380800"] [rev "1"] [msg "Atomicorp.com WAF Rules - Virtual Just In Time Patch: PHP Easter Egg Access"] [data "phpe9568f35-d428-11d2-a769-00aa001acf42"] [severity "CRITICAL"] Access denied with code 403 (phase 2). Pattern match "php(?:e9568f3[56]-d428-11d2-a769-00aa001acf42|b8b5f2a0-3c92-11d3-a3a9-4c7b08c10000)" at REQUEST_URI. [hostname "www.yoursever.com"]...
Magento version: 1.7.0.2
PHP version: 5.3.26
More information about the PHP Easter Egg Access rule:
http://www.atomicorp.com/forums/viewtopic.php?f=3&t=5057
http://www.0php.com/php_easter_egg.php
For those wanting a quick test script to replicate the issue (useful when trying to convince your hosting provider that it's a problem on their end), use:
<?php
$server = new SoapServer("http://<url to your magento shop>/index.php/api/v2_soap/index/wsdl/1/");
?>
This is the line in /lib/Zend/Soap/Server.php that triggers the error.
In my case, if you browsed to
http://< url to your magento shop >/index.php/api/v2_soap/index/wsdl/1/
the XML was fine, but if you ran the above PHP script on the server, the error was given.
This error most often appeared for me when omitting www from the domain in the Magento SOAP URL. The URL has to match the base URL specified in the Magento configuration.
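For example, if the configured base URL is http://www.example.com/ (a hypothetical value), the SOAP client must use the same host:
<?php
// Hypothetical: Magento base URL configured as http://www.example.com/
// Using "example.com" without "www" here could reproduce the error.
$client = new SoapClient('http://www.example.com/index.php/api/v2_soap/?wsdl');
?>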