I'm testing a web application using Apache ab (ApacheBench); I love it, but my web hosting service does not: if I increase the concurrency level above 30-40, I start getting 503 errors. The reason (according to the web hosting support) is that all the requests come from the same IP. Do you know of any similar tool that can also simulate concurrent requests coming from different IPs?
Do you think the rule adopted by the hosting service is correct? With fiber internet connections, it is not so uncommon for many people to share the same IP.
I'm not sure, but you can try Apache JMeter. Just run it remotely from machines with different IPs.
Mixpanel is using SoftLayer, which blocks all requests from IPs located in Iran. Is there a workaround to route these requests through IPs in another country, to be able to bypass their filter and send the data to Mixpanel?
There are multiple ways, depending on your configuration and platform.
What is your hosting? If it's shared, your options are limited, but if you deployed your application on a dedicated server or VPS, you can route your traffic via a transparent proxy or through a VPN tunnel. There are many services for that, too!
For example, Squid is a well-documented and easy-to-use service for this, but keep in mind that it works best on Linux! You can read these articles on configuring a transparent proxy with Squid: On Ubuntu, On CentOS.
But given the circumstances, I recommend using an open-source analytics system such as:
Matomo (formerly known as Piwik)
Open Web Analytics
Heap (a well-known Iranian event site, Evand, was using Heap)
You can connect through a VPN tunnel. It works like this: you connect to a computer somewhere else (in your case, in another country), and then you connect from that computer to the rest of the internet. So, to the rest of the internet, it looks like you're somewhere else.
You can check out ProtonVPN, they have VPN tunnels through a bunch of countries.
I created an API using the Slim framework, but the services are not accessible to the public as they are limited to localhost. How do I host the services on a live server so that they can be accessed from anywhere?
Can someone please help me?
This question requires more detail in order to answer properly.
If you are hosting your API on a Windows server, then it is likely you have configured some kind of "WAMP" stack, correct? Or maybe you are serving PHP through IIS? These are important questions, because we need to know which port your web application server is bound to, which leads us to the next question...
Where is the server running your application hosted, and to which port is the application bound?
Ultimately, a public, external IP will need to be either:
a. NAT'ed to the internal IP of your web server instance
b. Port-forwarded to the internal IP of the server running your web application
Still, we are making a lot of assumptions here because getting a web application "accessible from anywhere" will require different work depending on your environment.
Here is the most basic example:
You are at home, running this API on your Windows workstation, and would like to be able to hit it from a remote location.
Ensure Windows firewall allows inbound traffic to the port on which your application is running (probably port 80/HTTP, maybe 443/HTTPS).
Log into your ISP's router and configure port-forwarding to ensure inbound traffic on, say, port 80, is routed to the internal IP of the workstation running the API.
That's all there is to it.
Keep in mind that this also assumes your ISP even allows you to expose your own web server to the internet on port 80 (or 443). Also, since we know nothing about your environment, this is all pure conjecture. Please provide more information if you would like a real answer.
The most traditional way to host the Slim Framework would be through Apache. Install Apache and be sure you have the proper network settings to allow inbound connections; more information about your setup would be needed for proper guidance.
http://httpd.apache.org/docs/2.4/platform/windows.html
Once Apache is installed and working, you need to set up rewrite rules for the URLs; information on that can be found at http://docs.slimframework.com/routing/rewrite/.
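As a rough sketch (assuming mod_rewrite is enabled and the app's front controller is index.php in the same directory), the rewrite rules typically look something like this in an .htaccess file:

    # Send every request that does not match an existing file or directory
    # to Slim's front controller (index.php is an assumption here)
    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^ index.php [QSA,L]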
Your question is on the verge of being off topic (it probably is), but read up on which questions can and cannot be asked here on Stack Overflow. Hope I could help.
I have an unusual scenario, where I am trying to scale a WCF service that isn't thread safe. I have four instances of the service running on a single 4-core server, in four separate IIS web sites, with CPU affinity enabled. The sites are bound to ports 8022, 8023, 8024 and 8025.
My question is: can I use Application Request Routing (ARR) to load balance requests to a single port (80) across these four sites?
As far as I know, you can't use the Web Farm Framework for balancing between different ports on one server, maybe because from a failure-safety perspective it doesn't make sense.
A workaround is to add some additional IPs to your web server and configure your 4 web sites' bindings to listen on the same port but on different IPs.
So you can set up a web farm with 4 different IPs as servers, which are in fact located on the same physical machine.
Hope this helps.
Best regards,
Peter
I've been trying to mug up on Glassfish and one thing that keeps coming up is the "how-to" on fronting Glassfish with Apache. Unfortunately, I have yet to find a description of why you would want to do this!
From my experimentation, Glassfish seems like a pretty fully featured web server-type service; but I might be missing a lot. So, is the notion of front-ending Glassfish more of a solution to integrate it with an existing architecture, or does front-ending (in a pure Java environment) provide extra benefits?
There's also another valid use case for fronting Glassfish with Apache: Apache, in this instance, functions as a reverse proxy, for increased security of your Glassfish installation. The RP is configured to allow only certain URLs to be passed through to the application server. For example, you may have the app contexts /myApp and /myPrivApp deployed in Glassfish. In the RP server, you only configure /myApp to be passed to Glassfish; anybody requesting /myPrivApp would see a 404, because the request stops right at the RP level.
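A hedged sketch of that whitelisting with Apache's mod_proxy (the internal hostname and port are placeholders, not from the original answer):

    # Only /myApp is forwarded to Glassfish; there is no ProxyPass entry
    # for /myPrivApp, so the reverse proxy answers it with a 404.
    ProxyPass        /myApp http://glassfish.internal:28080/myApp
    ProxyPassReverse /myApp http://glassfish.internal:28080/myApp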
In one of my deployments, I have a bunch of WARs deployed, some for users coming from the internet, some for intranet only. I have 2 RPs running, one for internet users and the other for intranet. I configure the internet RP to only allow URLs for approved internet applications to pass through while intranet users get to see everything.
Hope that helps.
It is usually used to speed things up. Since Apache is a very fast web server, it is used to deliver static content like images, CSS files, and so on, while Glassfish serves the dynamic content (servlets, JSPs) in this scenario.
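A minimal sketch of that split with mod_proxy, assuming Glassfish listens on localhost:8080 and the static paths are /images and /css (both assumptions):

    # Apache serves the static directories itself and proxies everything
    # else to Glassfish
    DocumentRoot "/var/www/static"
    ProxyPass        /images !
    ProxyPass        /css    !
    ProxyPass        /       http://localhost:8080/
    ProxyPassReverse /       http://localhost:8080/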
Another reason for using Apache as a frontend to Glassfish is the possibility to provide load balancing across a Glassfish cluster. See http://tiainen.sertik.net/2011/03/load-balancing-with-glassfish-31-and.html for details.
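A rough mod_proxy_balancer sketch of that setup; the node hostnames and ports below are placeholders for your cluster instances:

    # Spread requests across two Glassfish cluster instances
    <Proxy balancer://glassfishcluster>
        BalancerMember http://node1.example.com:28080
        BalancerMember http://node2.example.com:28080
    </Proxy>
    ProxyPass        / balancer://glassfishcluster/
    ProxyPassReverse / balancer://glassfishcluster/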
Another reason is that Glassfish cannot (easily) run on port 80 without being given root rights, of course.
So, for most users, it's easier to run a proxy of some sort (Apache, nginx, Varnish) in front of Glassfish and have both servers run under a normal user.
You then get the added advantage of your front end's configuration options; like others mentioned, caching for example.
From what I understand, if you have multiple web servers, then you need some kind of load balancer that will split the traffic amongst your web servers.
Does this mean that the load balancer is the main connecting point on the network? ie. the load balancer has the IP address of the domain name?
If this is the case, it makes it really easy to add new hardware since you don't have to wait for any DNS propagation, right?
There are several solutions to this "problem".
You could round-robin at the DNS level, i.e. have www.yourdomain.com point to several IP addresses (well, all your servers).
This doesn't give you any intelligence in the load balancing; the load will be more or less randomly distributed. You also wouldn't be resilient to hardware failures, since removing a failed server would still require changes to DNS.
On the other hand, you could use a proxy, or a load-balancing proxy, that has a single IP but distributes the traffic to several back-end boxes. This gives you a single point of failure (the proxy; you could of course have several proxies to mitigate that problem), but it also gives you the added bonus of being able to use some metric to divide the load more evenly and intelligently than with just round-robin DNS.
This setup can also handle hardware failure in the back-end pretty seamlessly. The end user never sees the back-end, just the front-end.
There are other issues to think about as well: if your page uses sessions or other smart logic, you can run into synchronisation problems when your user (potentially) hits a different server on every access.
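A hedged sketch of such a load-balancing proxy using Apache's mod_proxy_balancer, with sticky sessions to keep a user on one back end; the back-end addresses, route names, and the Java-style JSESSIONID cookie are all assumptions:

    # Single front-end address; requests are spread across two back ends,
    # and the session cookie pins a user to the node that created it
    <Proxy balancer://webfarm>
        BalancerMember http://10.0.0.11:8080 route=web1
        BalancerMember http://10.0.0.12:8080 route=web2
    </Proxy>
    ProxyPass        / balancer://webfarm/ stickysession=JSESSIONID
    ProxyPassReverse / balancer://webfarm/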
It does (in general). It depends on what server OS and software you are using, but in general, you'll hit the load balancer for each request, and the load balancer will then farm out the work according to the scheme you have in place (round robin, least busy, session controlled, application controlled, etc...)
andy has part of the answer, but for true load balancing and high availability you would want to use a pair of hardware load balancers, like F5 BIG-IPs, in an active-passive configuration.
Yes, your domain IP would be hosted on these devices, and traffic would connect first to those devices. BIG-IPs offer a lot of added functionality, including multiple ways of load balancing, some great URL rewriting, SSL acceleration, etc. They also allow you to run your web servers on a separate, non-routable address scheme and even run multiple sites on different ports, with the F5s handling the translations.
Once you introduce load balancing, you may have some other considerations to take into account for your application(s), like sticky sessions and session state, but that is a different subject.