I have built a RESTful API in which I set the HTTP status code with http_response_code() in some cases. I was using the XAMPP Apache web server while developing the API on localhost.
The problem is that, now that I have deployed the API on our containerized EC2 DEV instance, which runs an nginx web server, none of my http_response_code() calls take effect and the API always returns a "200 OK" status code. That is not what I want in cases where an error should be returned, for example when a user already exists in the system and registering with the same email is forbidden.
Therefore, is there an equivalent of http_response_code() for nginx web servers?
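For illustration, a stripped-down version of what I'm doing looks roughly like this (the duplicate-email check is only a placeholder for the real lookup):

    <?php
    // Simplified sketch of the registration endpoint (placeholder logic only).
    header('Content-Type: application/json');

    // Stand-in for the real database lookup.
    function userAlreadyExists(string $email): bool
    {
        return $email === 'taken@example.com';
    }

    $email = $_POST['email'] ?? '';

    if (userAlreadyExists($email)) {
        http_response_code(409); // Conflict: a user with this email already exists
        echo json_encode(['error' => 'A user with this email already exists.']);
        exit;
    }

    http_response_code(201); // Created
    echo json_encode(['message' => 'Registration successful.']);

On XAMPP the 409 and 201 come back as expected; on the nginx instance every response arrives as 200 OK.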
From what I've gathered, hosting a website on a server means allocating space for your website's files, whereas hosting an API means the server continuously runs your API to receive incoming web requests. Is it true that when hosting a website, the server is not running anything? Or is it also continuously running the website, waiting for calls?
When you request a website, your browser makes a call over the network to a server machine that runs an application called a web server, for example Apache HTTP Server. Without that application, the machine would be unable to respond to your HTTP calls with web pages.
Web pages are just documents and resources, so they are not able to respond by themselves. An API, on the other hand, is usually a separate standalone application that can run on a different machine, and that is often called by web pages.
So the answer is no: the server has to run something even for a static website.
You need a server to "serve" your web page. It does not matter whether it is a static page or a dynamic page (HTML or PHP). If you have an HTML page, the server reads it and sends it to the user (no processing done); if you have a dynamic page such as PHP, the server processes the PHP code and generates a result, usually an HTML page, which is then served to the client.
An API follows the same approach as a dynamic page: you send parameters, the server processes them and then gives you a result. In the case of an API it may require authentication, and the result can be in HTML, XML or JSON, for example.
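As a rough sketch (the parameter and output are made up for illustration), a dynamic page or API endpoint is just a script the server executes on every request:

    <?php
    // Illustrative endpoint: the server runs this script on each request,
    // processes the incoming parameter and returns a freshly generated result.
    header('Content-Type: application/json');

    $name = $_GET['name'] ?? 'world';   // parameter sent by the client

    echo json_encode([
        'greeting'     => 'Hello, ' . $name,
        'generated_at' => date('c'),    // produced at request time, not read from a static file
    ]);

A static HTML file, by contrast, is sent back byte-for-byte without any of this processing.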
I have a .NET Core API inside a web app, and that web app is the backend pool for an Azure Application Gateway. While trying to access the web app through the gateway, I got the error below.
"502 - Web server received an invalid response while acting as a gateway or proxy server."
On the Application Gateway, the health probe for that web app shows unhealthy, but when I access the API directly at https://abc.azurewebsites.net/api/values it works.
When we deploy an API in an App Service Web App, apiname.azurewebsites.net on its own does not respond to the Application Gateway's probes, so the backend is treated as unhealthy. The API only answers on a path such as xxx.azurewebsites.net/api/values, and the Application Gateway needs to know this path. We have to put /api/values in the override backend path of the HTTP settings, and do the same in the health probe.
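If you manage the gateway with the Azure CLI rather than the portal, the change looks roughly like this (resource names are placeholders and exact parameter names may differ between CLI versions):

    # Override the backend path in the HTTP settings (sketch only).
    az network application-gateway http-settings update \
        --resource-group MyResourceGroup \
        --gateway-name MyAppGateway \
        --name MyHttpSettings \
        --path /api/values

    # Point the custom health probe at the same path.
    az network application-gateway probe update \
        --resource-group MyResourceGroup \
        --gateway-name MyAppGateway \
        --name MyHealthProbe \
        --path /api/values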
Yes, you can first verify whether the backend API can be accessed directly, without the Application Gateway. Beyond that, this error may happen for the following main reasons:
NSG, UDR or Custom DNS is blocking access to backend pool members.
Back-end VMs or instances of the virtual machine scale set are not responding to the default health probe.
Invalid or improper configuration of custom health probes.
Azure Application Gateway's back-end pool is not configured or empty.
None of the VMs or instances in the virtual machine scale set are healthy.
Request time-out or connectivity issues with user requests.
Generally, the backend health status and its details will point this out and give some clues. You could also verify each of the above reasons one by one according to this DOC.
I have an application that sends a request to a web service. Unfortunately the application is compiled and the link to the web service is embedded in the application as http. (Yes I know how dumb that is, I didn't write it)
Recently, the 3rd party company stopped allowing http requests; everything must be https.
The application runs as a webapp on Tomcat. The server is a windows server.
Is there a way to intercept the call to this web service and force it to use https?
As you can't change the application's source code (it is compiled), and you can't change the web service (it is 3rd party) either, the only way to solve this problem is to put a proxy between the application and the web service. To do that, you need to (assuming the proxy runs on localhost):
As the web service URL is embedded in the compiled application, the hosts mapping (e.g. /etc/hosts) needs to be changed to override DNS so that the application sends its HTTP requests to our proxy. For example, if the request in the application is GET http://example.com/api/sample, then example.com needs to be mapped to 127.0.0.1 in /etc/hosts (a line like 127.0.0.1 example.com).
Run a proxy web server on localhost, listening on the same port as the web service. This proxy is a very simple web server (any backend programming technology can do it); it is only responsible for forwarding requests. This way, when the application sends an HTTP request to example.com, the request actually goes to the proxy server.
After receiving the HTTP request from the application, the proxy server extracts the request URL/headers/body and sends an HTTPS request to example.com's real IP address. Please note: in this HTTPS request, a Host header with the value example.com should be set, as the 3rd party web service may check it.
Once the real response comes back from example.com, the proxy returns it to the application.
Of course, you could also use reverse engineering (e.g. a Java decompiler) to recover the application's "source code", change the web service URL and recompile it into a webapp. However, as the application may need updates or upgrades that are not under your control, this reverse-engineering approach is not recommended.
You could use a proxy script. Write it in any server-side language you want, for example PHP: point the API calls at this script, let the script make the https request for you, and pass the result back to your app.
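A minimal sketch of such a script using PHP's cURL extension (example.com and 203.0.113.10 are placeholders for the 3rd party host and its real IP, the web server is assumed to route every request to this script, and response headers are not copied back for brevity):

    <?php
    // Forward the incoming plain-HTTP request to the real HTTPS endpoint.
    $ch = curl_init('https://example.com' . $_SERVER['REQUEST_URI']);

    curl_setopt_array($ch, [
        // If /etc/hosts now points example.com at this proxy, resolve it to the
        // real IP here so the request does not loop back to ourselves.
        CURLOPT_RESOLVE        => ['example.com:443:203.0.113.10'],
        CURLOPT_CUSTOMREQUEST  => $_SERVER['REQUEST_METHOD'],
        CURLOPT_POSTFIELDS     => file_get_contents('php://input'), // forward the body
        CURLOPT_HTTPHEADER     => ['Content-Type: ' . ($_SERVER['CONTENT_TYPE'] ?? 'application/json')],
        CURLOPT_RETURNTRANSFER => true,
    ]);

    $response = curl_exec($ch);
    http_response_code(curl_getinfo($ch, CURLINFO_HTTP_CODE)); // pass the status code through
    curl_close($ch);

    echo $response;

Because the URL still uses the example.com hostname, cURL sends the matching Host header and certificate validation keeps working.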
You could also use Apache itself as the proxy, with something like this: Apache config: how to proxypass http requests to https
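With that approach, the relevant directives look roughly like this (host and paths are placeholders; mod_proxy, mod_proxy_http and mod_ssl need to be enabled):

    SSLProxyEngine on
    ProxyPass        "/api/" "https://example.com/api/"
    ProxyPassReverse "/api/" "https://example.com/api/"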
I followed the tutorial "deploy and run Service Stack application on Ubuntu Linux" and got my API up and running quickly. So far it is all plain text, though. I'd like to secure the API with SSL, especially the service receiving the username and password, but possibly everything.
I'm using the regular CredentialsAuthProvider together with JwtAuthProvider at the moment, if that's relevant. Using a 3rd party OAuth2/OpenID Connect provider would solve the login problem, but it would not secure the rest of the traffic.
I also wonder how to selectively choose which services require SSL.
The stack is: mono, nginx and HyperFastCGI (and C# ServiceStack)
You'll want to configure SSL on nginx, i.e. your external-facing web server. Which ASP.NET web framework you're using is irrelevant, as SSL is terminated at nginx and any downstream web applications will still receive plain-text requests.
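A trimmed-down sketch of such an nginx server block, assuming HyperFastCGI listens on 127.0.0.1:9000 and using placeholder certificate paths:

    server {
        listen 443 ssl;
        server_name api.example.com;                      # placeholder host

        ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder paths
        ssl_certificate_key /etc/nginx/ssl/example.key;

        location / {
            # SSL terminates here; the request is handed to HyperFastCGI as plain FastCGI
            fastcgi_pass 127.0.0.1:9000;
            include      fastcgi_params;
        }
    }

    # Optionally redirect plain HTTP to HTTPS
    server {
        listen 80;
        server_name api.example.com;
        return 301 https://$host$request_uri;
    }

The ServiceStack application itself keeps receiving plain-text requests, exactly as described above.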
If I have a WCF SOAP (C#) based web service running in my local IIS, and I make an ASP.NET website, also running in my local IIS, will JavaScript making HTTP requests from my web page succeed? Or do the same-origin policy rules come into play here?
It depends on how your sites are configured in IIS. Check out this Wikipedia article on the same-origin policy.
Let's say your WCF SOAP service is running at http://localhost/service/GetStuff.svc and your ASP.NET site is running at http://localhost/mysite/Default.aspx. According to the table in the same-origin article, the call should succeed, since the host is the same in both cases (localhost) and the URLs differ only in the directory being referenced.
But if your WCF SOAP service is running at http://localhost:8080/service/GetStuff.svc and your ASP.NET site is running at http://localhost/mysite/Default.aspx (default port 80), then the call will fail, since the origins differ in the port being accessed.
The three things to consider are host, protocol (http or https) and port. According to the article, not all browsers enforce the port check.
I hope this helps. Good luck!
BTW, does your application work?