HTTP POST/PUT files with headers that are guaranteed to be returned on GET operations - apache

I would like to POST/PUT files to a web server with certain HTTP headers, and have the web server return those headers on each GET request for those files.
Does the protocol itself include a way to guarantee this, or does this behavior depend on a particular (possibly proprietary) web server?
Thanks.

It depends on the web server. Nothing in the HTTP protocol itself requires a server to persist request header fields, and most won't store that data alongside the uploaded file.
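If you need it, you either pick a server or object store that supports user metadata (Amazon S3, for instance, persists x-amz-meta-* request headers and returns them on GET) or wire it up yourself. As a rough sketch of the latter (a toy Python origin server, not Apache; the X-Meta- prefix and the .headers sidecar file are inventions for the example):

    # Toy origin server: persists X-Meta-* request headers on PUT and
    # replays them on GET. Illustrative only, not production code.
    import json
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    DOCROOT = "files"

    class MetaHandler(BaseHTTPRequestHandler):
        def _paths(self):
            name = os.path.basename(self.path)  # naive: flatten everything into one directory
            return os.path.join(DOCROOT, name), os.path.join(DOCROOT, name + ".headers")

        def do_PUT(self):
            body_path, meta_path = self._paths()
            length = int(self.headers.get("Content-Length", 0))
            os.makedirs(DOCROOT, exist_ok=True)
            with open(body_path, "wb") as f:
                f.write(self.rfile.read(length))
            # Persist only the headers we explicitly opt in to (here: X-Meta-*).
            meta = {k: v for k, v in self.headers.items() if k.lower().startswith("x-meta-")}
            with open(meta_path, "w") as f:
                json.dump(meta, f)
            self.send_response(201)
            self.end_headers()

        def do_GET(self):
            body_path, meta_path = self._paths()
            if not os.path.exists(body_path):
                self.send_response(404)
                self.end_headers()
                return
            self.send_response(200)
            if os.path.exists(meta_path):
                with open(meta_path) as f:
                    for k, v in json.load(f).items():  # replay the stored headers
                        self.send_header(k, v)
            self.end_headers()
            with open(body_path, "rb") as f:
                self.wfile.write(f.read())

    if __name__ == "__main__":
        HTTPServer(("", 8000), MetaHandler).serve_forever()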

Related

What is the difference between a web service, HTTP, and an API?

I am taking a course on web data, so I understand that when we want to retrieve a webpage in a browser, we go through a request/response cycle using a communication protocol like HTTP or HTTPS. A web service is a piece of software (I don't know where it is stored or how it is accessed) that lets two applications with different architectures communicate using a serialization language like XML or JSON. I don't know what the difference is between a web service and HTTP: they are both a way to connect two different computers. What confuses me even more is the API, which, according to the research I did, is something used to access web services.
Let's begin with defining all the terms in your question since it's a bit all over the place.
HTTP (Hypertext Transfer Protocol): Allows you to transfer data over the web. Your browser will perform a request to your web service using HTTP.
Service: Any software that performs a specific task. We are interested in a web service, which is typically invoked via HTTP; however, it could be invoked by other mechanisms, such as a Linux signal.
For now, let's assume it listens on HTTP.
API (Application Programming Interface): An interface that all clients of your software must abide by in order to use it. For example, in our web service we can dictate an API so that requests follow some convention.
Let's put it all together now.
You're making a website that wants to calculate the sum of two numbers. First, users will go to http://yoursite.com, and then the browser will always do an HTTP request to the domain yoursite.com on port 80. This will hit either your hosting site or some backend server.
Here you have options: you can use something like GitHub Pages to serve static content, or you can run some server daemon that will load a file and serve it.
So now the web browser has made an HTTP request, and your webpage should load with an index.html. The user can click on buttons, and everything looks good until they press Calculate: what happens now?
We want to offload the computation to our backend, so we perform an HTTP request to our backend server. We can define an API (in our case, an endpoint) so that the HTTP request can hit it and get back the sum of the two numbers.
How do we return the result? We need to represent the data somehow, and this can be done through a body payload encoded as either JSON or XML. Again, these are serialization formats, and the data can be encoded in various ways. JSON is nice because you can parse it easily with JavaScript on the client side.
Great, so now we have an entire site and it works! We can make an HTTP request from the browser straight to the backend endpoint we set up, and it will fulfill the request. Notice that we are now using the backend server's API from within our own site.
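If it helps to see the moving parts, here is a toy version of that backend in Python; the /api/sum path and the a/b parameter names are made up for the example.

    # Toy backend: one endpoint that sums two numbers and returns JSON.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs, urlparse

    class SumHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            url = urlparse(self.path)
            if url.path != "/api/sum":
                self.send_response(404)
                self.end_headers()
                return
            params = parse_qs(url.query)
            try:
                result = float(params["a"][0]) + float(params["b"][0])
            except (KeyError, ValueError):
                self.send_response(400)  # missing or non-numeric parameters
                self.end_headers()
                return
            body = json.dumps({"sum": result}).encode()  # serialize the result as JSON
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), SumHandler).serve_forever()

Requesting http://localhost:8080/api/sum?a=2&b=3 returns {"sum": 5.0}, which the page's JavaScript can hand straight to JSON.parse.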
Other keywords you may run into: CORS, AJAX, Apache Server. Good luck!

Dynamic Auth / Headers in Mulesoft HTTP connector

I frequently encounter scenarios where I would like to dynamically configure headers on outbound HTTP/HTTPS requests within a Mulesoft flow and I cannot figure out a nice way to do this.
For example, I have tried, in an HTTP connector, doing the following in the associated configuration:
And I have tried using the HTTP Connector's HOST override setting to pass in a user/password (even statically):
Is there a better way here? It's gotta be a pretty common issue that one flow may need to broker requests with different credentials based on who is calling...
AFAIK, the dynamic values can be set as flow variables, which can in turn be referenced in the HTTP headers/path.
The problem then becomes how to make the flow variable dynamic. That depends on your situation; basically, you set the variable to a value at runtime (retrieved from the request payload, an incoming request header, etc.), and the HTTP requester that follows will pick it up as if it were set dynamically, as sketched below.
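Mule configuration aside, the underlying pattern is easy to sketch. The Python below stands in for a brokering flow: it derives credentials from a hypothetical X-Tenant header on the incoming request and sets them as an outbound Authorization header, which is the role the flow variable plays in your HTTP requester. The credential table and header names are made up.

    # Language-agnostic sketch of the brokering pattern (not Mule code).
    import base64
    import urllib.request

    # Hypothetical per-caller credential lookup (the "flow variable" in Mule terms).
    CREDS = {"tenant-a": ("alice", "secret1"), "tenant-b": ("bob", "secret2")}

    def broker_request(incoming_headers, backend_url):
        tenant = incoming_headers.get("X-Tenant", "tenant-a")
        user, password = CREDS[tenant]
        token = base64.b64encode(f"{user}:{password}".encode()).decode()
        req = urllib.request.Request(backend_url)
        req.add_header("Authorization", "Basic " + token)  # dynamic, per-caller header
        return urllib.request.urlopen(req)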

MarkLogic - CORS with REST API

I have a MarkLogic-based web application which pulls data from two sources: a document store and a triple store, both hosted on my MarkLogic server. The app uses MarkLogic's built-in REST APIs to access these data stores. The document store's REST API is running on port 8003 and the triple store's REST API is on port 8007. The application is hosted on the modules database of the document store. When I make a REST API call to pull triple data, I get an exception saying the 'access-control-allow-origin' header has not been set on the server side. I would like to know how I can set things up so that I get 'access-control-allow-origin: *' in all responses from the REST API. I have read the documentation on xdmp:add-response-header, but I'm not able to figure out where to use it. All help is much appreciated!
Why not keep the documents and triples in the same database? The ability to do that is one of MarkLogic's strengths.
The built-in REST API endpoints don't seem to support any mechanism for adding arbitrary response headers. However, you should be able to add your own headers (via xdmp:add-response-header) when writing a REST extension: https://docs.marklogic.com/guide/rest-dev/extensions
For the built-in endpoints, you might consider routing requests through another app-server layer or a transparent reverse proxy. Either way, the goal is to route requests so that the browser thinks both REST API instances are on the same origin; a sketch of that idea follows.
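For what it's worth, here is a minimal sketch of that reverse-proxy idea in Python (in practice you would use nginx or another app server rather than this). The /docs and /triples path prefixes are made up; the ports match the question.

    # Toy reverse proxy: one origin in front of both REST API instances,
    # so the browser never makes a cross-origin request and CORS never applies.
    import urllib.error
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    ROUTES = {"/docs": "http://localhost:8003", "/triples": "http://localhost:8007"}

    class ProxyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            for prefix, backend in ROUTES.items():
                if self.path.startswith(prefix):
                    upstream = backend + self.path[len(prefix):]
                    try:
                        resp = urllib.request.urlopen(upstream)
                    except urllib.error.HTTPError as err:
                        resp = err  # an HTTPError doubles as a response object
                    self.send_response(resp.getcode())
                    self.send_header("Content-Type",
                                     resp.headers.get("Content-Type", "application/octet-stream"))
                    self.end_headers()
                    self.wfile.write(resp.read())
                    return
            self.send_response(404)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), ProxyHandler).serve_forever()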

Under what conditions are HTTP request headers removed by proxies?

I'm looking at various methods of RESTfully versioning APIs, and there are three major contenders. I believe I've all but settled on using X-API-Version. Putting that debate aside, one of the arguments against using that header, and custom headers in general, is that you can't control when headers are manipulated by proxy servers. I'm curious about what real-world examples there are of this, when it happens on the internet at large, or when it might be used on an intranet or server cluster, or when it might occur in any other situation.
The Guidelines for Web Content Transformation Proxies 1.0 is pretty much the definitive guide to understanding and predicting standards-compliant proxy server behavior. In terms of your question, the Proxy Forwarding of Request portion of the document might be especially helpful.
Each proxy software package, and each individual configuration, will vary, but HTTP proxies are generally expected to follow the W3C guidelines. Here are some highlights.
4.1 Proxy Forwarding of Request:
Other than to convert between HEAD and GET proxies must not alter request methods.
If the request contains a Cache-Control: no-transform directive, proxies must not alter the request other than to comply with transparent HTTP behavior defined in [RFC 2616 HTTP] sections 14.9.5 and 13.5.2 and to add header fields as described in 4.1.6 Additional HTTP Header Fields below.
4.1.3 Treatment of Requesters that are not Web browsers
Before altering aspects of HTTP requests and responses proxies need to take account of the fact that HTTP is used as a transport mechanism for many applications other than "Traditional Browsing". Increasingly browser based applications involve exchanges of data using XMLHttpRequest (see 4.2.8 Proxy Decision to Transform) and alteration of such exchanges is likely to cause misoperation.
4.1.5 Alteration of HTTP Header Field Values
Other than the modifications required by [RFC 2616 HTTP] proxies should not modify the values of header fields other than the User-Agent, Accept, Accept-Charset, Accept-Encoding, and Accept-Language header fields and must not delete header fields (see 4.1.5.5 Original Header Fields).
Other than to comply with transparent HTTP operation, proxies should not modify any request header fields unless one of the following applies:
the user would be prohibited from accessing content as a result of the server responding that the request is "unacceptable" (see 4.2.4 Server Rejection of HTTP Request);
the user has specifically requested a restructured desktop experience (see 4.1.5.3 User Selection of Restructured Experience);
the request is part of a sequence of requests comprising either included resources or linked resources on the same Web site (see 4.1.5.4 Sequence of Requests).
These circumstances are detailed in the following sections.
Note:
It is emphasized that requests must not be altered in the presence of Cache-Control: no-transform as described under 4.1.2 no-transform directive in Request.
The URI referred to in the request plays no part in determining whether or not to alter HTTP request header field values. In particular the patterns mentioned in 4.2.8 Proxy Decision to Transform are not material.
4.1.6 Additional HTTP Header Fields
Irrespective of the presence of a no-transform directive:
proxies should add the IP address of the initiator of the request to the end of a comma separated list in an X-Forwarded-For HTTP header field;
proxies must (in accordance with RFC 2616) include a Via HTTP header field (see 4.1.6.1 Proxy Treatment of Via Header Field).
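To make 4.1.6 concrete, here is a small Python illustration of how a compliant proxy would update those two fields before forwarding a request; the proxy name is a placeholder.

    # How a compliant proxy extends X-Forwarded-For and Via on each hop.
    def add_forwarding_headers(headers, client_ip, proxy_name="1.1 example-proxy"):
        # Append this hop's client to the comma-separated X-Forwarded-For list.
        xff = headers.get("X-Forwarded-For")
        headers["X-Forwarded-For"] = (xff + ", " + client_ip) if xff else client_ip
        # RFC 2616 requires each proxy to add itself to the Via chain.
        via = headers.get("Via")
        headers["Via"] = (via + ", " + proxy_name) if via else proxy_name
        return headers

    # Two hops:
    h = add_forwarding_headers({}, "203.0.113.7")
    h = add_forwarding_headers(h, "10.0.0.5", "1.1 inner-proxy")
    # h["X-Forwarded-For"] == "203.0.113.7, 10.0.0.5"
    # h["Via"] == "1.1 example-proxy, 1.1 inner-proxy"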
There is also a lot of information regarding the alteration of response headers and how to detect those changes.
As for web service REST API versioning, there is a very lucid and useful SO thread at Best practices for API versioning? that should provide a wealth of helpful insight.
I hope all of this helps. Take care.
This isn't an answer per se, but rather a mention of a real-world scenario.
My current environment uses a mixed CAS/AD solution in order to allow SSO across several different platforms (classic ASP, ASP.NET, J2EE, you name it).
Recently we identified some issues. Part of the solution involves attaching auth tokens to HTTP headers whenever necessary to propagate credentials. One specific solution, which made heavy use of cookies, was chained with an nginx instance whose HTTP header size limit was set to 4 KiB. If the cookie payload went over 2 KiB, headers would start getting dropped.
Consequently, applications that had some sort of state/scope control being coordinated via HTTP headers (session cookies included) suddenly started behaving erratically.
On an interesting, related note, REST services using URL versioning (http://server/api/vX.X/resource, for example) were unaffected.

Injecting data caching and other effects into the WCF pipeline

I have a service that always returns the same results for a given parameter. So naturally I would like to cache those results on the client.
Is there a way to introduce caching and other effects inside the WCF pipeline? Perhaps a custom binding class that could sit between the client and the actual HTTP binding.
EDIT:
Just to be clear, I'm not talking about HTTP caching. The endpoint may not necessarily be HTTP and I am looking at far more effects than just caching. For example, one effect I need is to prevent multiple calls with the same parameters.
The WCF service can use Cache-Control directives in the HTTP response headers to tell the client how to use its local cache. There are many options, all part of the HTTP protocol. For example, you can define how long the client may serve the data from its local cache instead of making requests to the server. All clients that implement HTTP, such as web browsers, will follow these instructions. If your client makes AJAX requests to the WCF server, the corresponding AJAX call can simply return the data from the local cache.
Moreover, you can implement many interesting caching scenarios. For example, if you set "Cache-Control" to "max-age=0" (see here for an example), the client will always revalidate its cache with the server. Typically the server sends a so-called "ETag" in the header together with the data. The ETag is an MD5 hash, or any other value that changes whenever the data changes. The client automatically sends the ETag it previously received from the server in the header of its next GET request. The server can then answer with the special response HTTP/1.1 304 Not Modified (instead of the typical HTTP/1.1 200 OK) and an empty body, in which case the client can safely take the data from its local cache.
I use "Cache-Control:max-age=0" additionally with Cache-Control: private which switch off caching the data on the proxy and declare that the data could be cached, but not shared with another users.
If you want to read more about controlling caching with HTTP headers, I recommend the Caching Tutorial.
UPDATED: If you want to implement some general-purpose caching, you can use the Microsoft Enterprise Library, which contains a Caching Application Block. The Microsoft Enterprise Library is published on CodePlex with source code. As an alternative, in .NET 4.0 you can use System.Runtime.Caching, which can be used outside of ASP.NET as well (see here).
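If HTTP caching is not an option (a non-HTTP binding, or the duplicate-call suppression mentioned in the question), the client-side wrapper pattern behind those libraries is straightforward in any language. Here is a Python sketch of the idea, with a per-parameter result cache and deduplication of concurrent identical calls; the class and method names are made up.

    # Not WCF: a language-agnostic sketch of caching plus call deduplication.
    import threading

    class CachingClient:
        def __init__(self, call_service):
            self._call = call_service    # the real (expensive) service call
            self._cache = {}             # parameter -> result
            self._locks = {}             # parameter -> lock, to collapse in-flight duplicates
            self._guard = threading.Lock()

        def get(self, param):
            if param in self._cache:     # fast path: already computed
                return self._cache[param]
            with self._guard:
                lock = self._locks.setdefault(param, threading.Lock())
            with lock:                   # only one caller per parameter hits the service
                if param not in self._cache:
                    self._cache[param] = self._call(param)
            return self._cache[param]

    # Usage: client = CachingClient(service_proxy.get_results); client.get("x")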
I still recommend using an HTTP binding with HTTP caching if it is at all possible in your environment. That way you save a lot of development time and end up with a simpler, more scalable, and more effective application. Because HTTP is so widely used, a great deal of useful functionality has already been implemented that you can use out of the box; caching is only one of those features.