To bypass filtering in my country, I've rented a server abroad (CentOS 5) with 256 MB of RAM. The client is Ubuntu 12.04. I run this command on the client to set up the tunnel:
ssh -CNfD 1080 <user>@<server-ip>
In Firefox's settings, I defined a SOCKS proxy server:
localhost:1080
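To verify that the tunnel really is speaking SOCKS5 independently of Firefox, the first two handshake messages from RFC 1928 can be built by hand. This is a minimal sketch (no partial-read handling, no authentication methods beyond "none"); the proxy address assumes the ssh command above is running:

```python
import socket
import struct

def socks5_greeting() -> bytes:
    # VER=5, NMETHODS=1, METHODS=[0x00] ("no authentication required")
    return b"\x05\x01\x00"

def socks5_connect(host: str, port: int) -> bytes:
    # VER=5, CMD=1 (CONNECT), RSV=0, ATYP=3 (domain name), then len+name+port
    name = host.encode("idna")
    return b"\x05\x01\x00\x03" + bytes([len(name)]) + name + struct.pack(">H", port)

def via_socks(proxy: tuple, host: str, port: int) -> socket.socket:
    """Open a TCP connection to host:port through a SOCKS5 proxy (sketch only)."""
    s = socket.create_connection(proxy)
    s.sendall(socks5_greeting())
    if s.recv(2) != b"\x05\x00":          # server must accept the no-auth method
        raise OSError("proxy refused no-auth method")
    s.sendall(socks5_connect(host, port))
    reply = s.recv(4)                     # VER, REP, RSV, ATYP
    if reply[:2] != b"\x05\x00":          # REP=0x00 means success
        raise OSError("CONNECT failed")
    atyp = reply[3]
    if atyp == 1:                         # IPv4 bound address + port
        s.recv(6)
    elif atyp == 4:                       # IPv6 bound address + port
        s.recv(18)
    elif atyp == 3:                       # domain: length byte, name, port
        s.recv(s.recv(1)[0] + 2)
    return s                              # now a raw tunnel to host:port
```

For example, `via_socks(("localhost", 1080), "example.com", 80)` should return a socket you can send a plain HTTP request over, confirming the SOCKS side works before blaming Firefox.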
Using this method, everything works properly and I can bypass the limitations, but the speed drops noticeably and I don't know why. I have some guesses that I want to share with you to get your opinions:
With a direct connection, most sites use plain HTTP, but through the proxy all traffic goes over the encrypted connection provided by SSH. My provider may have throttled encrypted connections. (I think this may be the cause, but HTTPS sites that don't use the proxy still seem to open faster.)
Tunnelling like this inherently reduces internet speed, maybe because of the overhead applied to encrypted packets or some other reason. If so, what could I use instead? I have a working dedicated server.
PS. The server internet connection speed is much higher than the speed (bandwidth) between client and server.
PPS. Could I set up an HTTP tunnel instead? Or use some software other than ssh that is faster, has less overhead, or avoids encryption altogether?
Please help me figure out what is really happening, since I'm not so familiar with these concepts.
I am afraid there is not much you can do...
Indeed it is to be expected that speed and throughput decrease, and latency increases, when you tunnel your payload data through an encrypted tunnel. The main reasons are the overhead of encryption and, depending on the connection at hand, the modified (longer) routing. You also have to take into account that most of the encryption has to be done by your tunnel endpoint, i.e. your server in this case. If that system lacks computing power, the result will obviously be reduced throughput. Things like CDNs also won't work the same any more.
It might very well be that your service provider throttles different types of connections. Especially in areas with heavy control and censorship of communication content, it clearly makes sense for the authorities to prefer unencrypted payload, i.e. payload that can be inspected and filtered. Everything that keeps people from using encryption is in their interest, so throttling encrypted communication makes sense from their point of view. Sad, but true nevertheless.
The only thing you can really influence is your tunnel endpoint, i.e. your server. More computing power could remove a bottleneck if that system shows high load caused by the encryption.
Its network connection is also of interest, just like your local connection: the encrypted tunnel requires considerably more control data on the upload side than unencrypted traffic does. Since upload bandwidth is typically much lower than download bandwidth, this could be an issue as well.
Will Cloudflare accelerate my websocket data transfer speed by default (without any additional configurations)?
What paid and free configurations can I use to improve my websocket connection? Will Argo help here?
What level of performance increase should I wait from these different configurations?
p.s. I know that a CDN mostly concentrates on optimizing the serving of static content, but I'm still curious: will it help at least a bit with dynamic content?
CDNs accelerate static content that can be cached and distributed to servers in different geolocations. But WebSockets are used to serve dynamic content, so the limiting factors there are the power of the origin server and its geolocation.
So Cloudflare, or any other CDN, is not able to accelerate WebSockets the way it can static content, though yes, Argo might help in certain cases. The really limiting/problematic factor with WebSockets is your application/setup handling the requests.
There are, however, certain conditions under which Cloudflare can accelerate the connection. Some ISPs demand extra money for better routing ("double paid traffic"), and some data center owners refuse to pay it.
So it might be that the unpaid route is slower than a route that uses Cloudflare as a proxy, provided Cloudflare pays for the better routing. But then it's not the technical side of Cloudflare that accelerates the connection, it's the contract. You might need to ask your hoster about that case.
Note that Cloudflare will reset the websocket connections now and then:
“Logs from tcpdump show that Cloudflare sends a TCP reset after 1-5 minutes, despite both client and server being in sync on packets sent in each direction” - https://community.cloudflare.com/t/websockets-disconnected-in-aws-tokyo/44680
“If you’re intending to use CF websockets, be prepared for random (and potentially massive) connection drops, and be sure you’re architected to handle these disconnects gracefully. Cloudflare rolling restarts have caused hundreds of thousands of websocket connections to be disconnected in a matter of minutes for us”; “when terminating a WebSocket connection due to releases CloudFlare now signals this action to both client and origin server by sending the 1001 status code” - https://news.ycombinator.com/item?id=11638081
“When Cloudflare releases new code to its global network, we may restart servers, which terminates WebSockets connections” - https://support.cloudflare.com/hc/en-us/articles/200169466-Using-Cloudflare-with-WebSockets#12345687
So, to answer your question: Argo Smart Routing inside Cloudflare can, in theory, accelerate WebSocket connections and make them more reliable. But we also know for a fact that it will lead to regular WebSocket disconnects.
Maybe use Cloudflare for backup connections, to improve the total resilience in the face of routing anomalies.
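Given those periodic resets, the client needs a reconnect loop either way. Below is a generic sketch with capped exponential backoff and jitter; `connect` is a placeholder for whatever opens and runs one websocket session (for example a call into a websocket client library), not a real API:

```python
import random
import time

def backoff_delays(base=1.0, cap=60.0, factor=2.0):
    """Yield capped exponential delays with full jitter (AWS-style)."""
    delay = base
    while True:
        yield random.uniform(0, delay)   # jitter avoids reconnect stampedes
        delay = min(cap, delay * factor)

def run_with_reconnect(connect, max_attempts=None):
    """Call connect() until it returns normally; back off and retry on failure.

    connect is expected to block for the life of one websocket session and
    raise ConnectionError when the proxy resets it.
    """
    attempts = 0
    for delay in backoff_delays():
        try:
            return connect()
        except ConnectionError:
            attempts += 1
            if max_attempts is not None and attempts >= max_attempts:
                raise
            time.sleep(delay)
```

The jitter matters when Cloudflare restarts drop "hundreds of thousands" of connections at once, as quoted above: without it, every client reconnects at the same instant and the origin takes a second, self-inflicted spike.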
I'm in trouble; please help me figure out the problem. I've run my website on my Apache server for quite some time now and recently ran into an issue that has me stumped.
My server has been DDOS attacked in the past requiring me to move my server behind a proxy/WAF. For some time I was behind Sucuri as it provided the best affordable defense at the time. The attacks tapered off and I moved to Cloudflare free to protect my IP address while lightening up on my monthly server costs. The switch was smooth and everything has been working fine for several months.
I was recently hit again with what seemed to be a layer 7 attack. I could see several IP addresses making 10-20 requests every couple of seconds in my domain's access.log. Running netstat returned thousands of TIME_WAIT and SYN_RECV entries, all with Cloudflare IP addresses. This led me to believe the attack was aimed at my domain, being proxied by Cloudflare, and reaching my server regardless of my security settings. I confirmed this by viewing the statistics provided by Cloudflare and seeing millions of requests made in a short time period. Unfortunately this makes it even more difficult to pinpoint the attack. What should I do?
I've enabled SYN cookies, added mod_cloudflare to Apache, activated Cloudflare's WAF / rate limiting rules, blocked offending IP addresses, and used mod_evasive to automatically blacklist future offenders. This has reduced (and almost stopped) the number of malicious requests seen in the Apache access log but has not resolved the timeouts.
According to Cloudflare analytics, I've only received 16,000 requests in the previous 6 hours (as opposed to the tens of millions when I was being actively attacked) but I get timeouts on every other request (even directly connecting, without Cloudflare).
Thanks
Boost proxy server security and defend against DoS attacks by blocking unsolicited packets or by using load balancers; these measures can help reduce the impact the attack has on the server.
There are also attacks that use a proxy server on the Internet as a transit device to hide the originating source of the attack on your network. Blocking open or malicious proxy servers from accessing the network or servers is one way to prevent this type of attack from succeeding.
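As a sketch of that idea, blocking by known proxy CIDR ranges is just a membership check. The ranges below are made-up documentation addresses; a real deployment would load them from a threat-intelligence feed:

```python
import ipaddress

# Hypothetical example ranges (RFC 5737 documentation space used as stand-ins).
BLOCKED_PROXY_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocked(addr: str) -> bool:
    """True if the source address falls inside any blocked proxy range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLOCKED_PROXY_RANGES)
```

In practice you would enforce this at the firewall (ipset/nftables) rather than in the application, but the lookup logic is the same.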
I hope this helps.
I think you should also ask your web host or Cloudflare support, and raise a ticket with Sucuri. Their team works closely with the respective developers to fix security issues; once fixed, Sucuri patches those vulnerabilities at the firewall level.
During attacks, a website with heavy traffic like yours slows down significantly due to the high server load. Sometimes an attack can even cause the server to restart, causing downtime.
When you enable Sucuri, all your site traffic goes through their CloudProxy firewall before reaching your hosting server. This allows them to block the attacks and pass on only legitimate visitors.
Sucuri's firewall blocks attacks before they even touch your server. As one of the leading security companies, Sucuri also proactively researches and reports potential security issues to the WordPress core team as well as to third-party plugin developers.
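One practical consequence of proxying all traffic this way: your access logs show the firewall's addresses, not the visitor's (exactly the netstat symptom described above). The real client IP usually arrives in a header (Cloudflare sets CF-Connecting-IP; many proxies use X-Forwarded-For), and it should only be trusted when the TCP peer really is the proxy. A sketch, with illustrative proxy ranges:

```python
import ipaddress

# Trust forwarding headers only from the proxy's own ranges (illustrative values;
# Cloudflare and Sucuri publish their real ranges).
TRUSTED_PROXIES = [ipaddress.ip_network("203.0.113.0/24")]

def client_ip(peer_addr: str, headers: dict) -> str:
    """Return the real client address for a request arriving via a proxy/WAF."""
    peer = ipaddress.ip_address(peer_addr)
    if any(peer in net for net in TRUSTED_PROXIES):
        if "CF-Connecting-IP" in headers:
            return headers["CF-Connecting-IP"]
        xff = headers.get("X-Forwarded-For")
        if xff:
            # left-most entry is the original client
            return xff.split(",")[0].strip()
    # untrusted peer: never believe its headers
    return peer_addr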
If this still doesn't resolve the problem, it may be a different type of attack:
TCP Connection Attacks
These attempt to use up all the available connections to infrastructure devices such as load-balancers, firewalls and application servers. Even devices capable of maintaining state on millions of connections can be taken down by these attacks.
Volumetric Attacks
These attempt to consume the bandwidth either within the target network/service, or between the target network/service and the rest of the Internet. These attacks are simply about causing congestion.
Fragmentation Attacks
These send a flood of TCP or UDP fragments to a victim, overwhelming the victim's ability to re-assemble the streams and severely reducing performance.
Application Attacks
These attempt to overwhelm a specific aspect of an application or service and can be effective even with very few attacking machines generating a low traffic rate (making them difficult to detect and mitigate).
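For the application-attack case in particular, per-IP rate limiting (what mod_evasive and Cloudflare's rate-limiting rules do internally) is the usual mitigation. A minimal sliding-window sketch of the idea:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per `window` seconds per client IP."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)       # ip -> timestamps of recent requests

    def allow(self, ip: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        while q and now - q[0] >= self.window:   # evict expired timestamps
            q.popleft()
        if len(q) >= self.limit:
            return False                          # over the limit: drop/429
        q.append(now)
        return True
```

A real deployment would bound the `hits` table and block repeat offenders at the firewall instead of answering them, but the window logic is the same as what the asker already configured in Cloudflare.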
I'm not trying to setup a VPN. I want to secure tcp sessions between services that might be implemented in either user-mode or as kernel daemons. If it weren't for the kernel requirement, TLS would probably suffice.
First target would be Linux; pointers to any example code in user or kernel mode would be dandy if there are any.
All the existing examples I've found are about creating VPNs and use a bunch of static configuration in protected directories, all of which I'd like to avoid. I imagine I'd be looking at setsockopt calls to define keys before listening and connecting, but I have so far found nothing.
A VPN will just give you a secure tunnel for your communication, and it comes at the price of a slower connection and extra overhead. If you are looking at IPsec, be aware that programmatically securing traffic for changing IP addresses carries the same price of large communication overhead.
It is important to know what your specific need is. If you are not bothered by overhead or extra cost, you can certainly do IPsec at the network layer. But if you are worried about performance or want less overhead in your communication, SSL/TLS is better at offering you the desired security.
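For the user-mode half of the problem, plain TLS really is just wrapping the TCP socket, no static configuration directories required. A client-side sketch with Python's ssl module and the system trust store (the kernel-daemon half is the hard part; Linux kTLS can move the TLS record layer into the kernel after a user-space handshake, but that is beyond a short example):

```python
import socket
import ssl

def make_client_context() -> ssl.SSLContext:
    # System CA store, hostname verification on, sane protocol defaults.
    return ssl.create_default_context()

def open_tls(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TCP connection and upgrade it to TLS with certificate checks."""
    raw = socket.create_connection((host, port))
    # server_hostname enables SNI and hostname verification
    return make_client_context().wrap_socket(raw, server_hostname=host)
```

For service-to-service use where no public CA is involved, the same context can instead be loaded with your own CA via `ctx.load_verify_locations(...)`, which keeps key material in ordinary files you control rather than protected system directories.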
I've experienced a CPU usage surge coming from a WCF service that sends large files to requesting clients over HTTPS. Does TLS need to encrypt the whole file before sending it down or does it just encrypt the packets? I'm trying to find out what in the service is causing the surge as the WCF method responsible just serves files on disk. These files used to be smaller and so was the CPU load. There is only one endpoint with a binding that uses streaming and MTOM.
Regards,
F
TLS encrypts the data record by record as it is transmitted; it never needs the whole file encrypted up front. The file you are sending is not itself encrypted, the communication of that file is encrypted -- it's a subtle but important difference.
Of course, using HTTPS does decrease scalability (because of the server affinity caused by the HTTPS session) and adds some performance cost, but you can mitigate that with dedicated SSL-offload hardware in your server.
SSL and TLS act at the transport layer, so anything sent over that session should be encrypted at the time of sending, and immediately decrypted upon receiving it. That means they can still be used to effectively secure streams or other open-ended communications.
Because the encryption will only happen as fast as the communication link, it should be reasonably constant. If you're seeing performance problems, it may simply be due to your files being larger, meaning proportionally more processing and time. Of course, if you have many clients requesting data at the same time, and it all needs to be encrypted, you'll soon reach the limit of either the processor or the network device. That's why web sites that support SSL often choose to secure only very specific sections, like login and password changing pages. If they secured every single request, they would get overloaded.
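To put numbers on the record-by-record point: TLS splits the stream into records of at most 16 KB of plaintext, each encrypted and authenticated independently as it is sent. A back-of-the-envelope estimate (the 29-byte per-record overhead is an assumption roughly matching AES-GCM in TLS 1.2; exact figures vary by cipher and version):

```python
import math

TLS_MAX_RECORD = 16 * 1024   # 16 KiB of plaintext per record
RECORD_OVERHEAD = 29         # assumed: 5 B header + 8 B nonce + 16 B auth tag

def wire_size(file_bytes: int) -> tuple:
    """Estimate (record count, on-the-wire bytes) for streaming a file over TLS."""
    records = math.ceil(file_bytes / TLS_MAX_RECORD)
    return records, file_bytes + records * RECORD_OVERHEAD
```

A 1 GiB file streams as roughly 65,536 small encryptions spread over the transfer, never one big one, which is why CPU load tracks concurrent transfer rate rather than file size held in memory.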
I need to transfer small amounts of data intermittently from clients to our server in a secure fashion, and occasionally pull down large binary files from the server. It's important for all of this to be reliable. I'm anticipating 100,000 clients. I control both ends, but I want to deliver a solution that doesn't require changing the firewall for the majority of customers. A lag of one or two minutes before the information migrates to the server or comes down is acceptable at this time.
We need to make the connection secure, so I was thinking about SSL, but I'm open to suggestions. Basically, what is the best binding to use in this situation so that we have secure transmission and the system handles the stress and load in a way that works for 95% of clients out of the box (i.e. is not blocked by the majority of firewall configurations)?
Firewall: you can use port sharing on a well-known port, or add yourself to the exception list if the client is using Windows Firewall.
Using a self-signed certificate on a net.tcp binding with transport security would be ideal.