Keep track of a user 100% reliably despite dynamic IPs

I am trying to ban users that spam my service by logging their IP and blocking it.
Of course this isn't reliable at all, because of dynamic IP addresses.
Is there a way to identify a user that's 100% reliable?
I've heard about something called evercookie, but I was easily able to delete that, and I guess that anyone capable of changing their IP can also keep their PC clean...
Are there any other options? Or is it just not possible?

A cookie will prevent the same browser from visiting your site only as long as the user doesn't delete it, turn off cookies, use a different browser, reinstall their browser, use another machine, and so on.
There is no such thing as 100% safe. Spam is an ongoing problem that most websites just have to learn to deal with.
There are numerous highly secure options, mostly relying on multi-factor authentication and physical key generators like the ones RSA markets. But the real question is an economic one. The more draconian the authentication mechanism, the more quickly you kill your website as you scare off all your visitors.
More practical solutions involve CAPTCHA, forum moderation, spam-reporting affordances, etc. One particularly effective technique is to block offending content from every IP address except the one that originated it. That way, the original spammer thinks their content is still there, oblivious to the fact that no one else can see it.
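As a minimal sketch of that last technique, here is how the per-IP check might look in nginx with OpenResty's Lua module (purely illustrative: the location names, the "backend" upstream, and the hard-coded IP set are placeholders, and a real site would keep the ban list in a database):

    # "backend" and both /thread* locations are assumed names
    location /thread {
        access_by_lua_block {
            -- hypothetical set of shadow-banned spammer IPs
            local spammer_ips = { ["203.0.113.7"] = true }

            if spammer_ips[ngx.var.remote_addr] then
                -- the spammer still sees their own spam
                return ngx.exec("/thread_unfiltered")
            end
            -- everyone else falls through to the filtered view
        }
        proxy_pass http://backend/thread_filtered;
    }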

Alright, I get that it's impossible to identify a unique visitor with 100% certainty.
What are the things that I could do to:
- find out whether someone (anonymous) is using lots of different proxies to see my content (the problem here being that cookies would land on the proxy machine and not the actual visitor's PC?)
- identify unique (anonymous) visitors with dynamic IPs

Related

Many connections on site, but JS, images, and CSS never loaded

I have a web shop that gets a lot of traffic, but this traffic is apparently generated by bots, which load pages but never load the images, JavaScript files, or CSS files.
I want to block these connections with mod_security, but I can't find a rule anywhere that does this.
First I tried blocking the IPs with the firewall, but they keep coming back from other addresses, so that approach is pointless.
The user agent is different each time, so you can't tell from it that the connection comes from a bot. The only thing that is noticeable is that they never load an image or a CSS file from the page.
The accesses are not frequent or fast enough for mod_evasive to trigger, however.
Apparently the goal is to make the page so slow that normal users quickly give up :-( I suspect a competitor is running a DDoS attack... but who knows.
Does anyone have the same problem, or a mod_security rule that would deal with it?
Does anybody have any idea?
Best regards,
Holger
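In case it helps, one possible starting point is an (untested) set of mod_security rules that count, per IP, how many pages are requested without a single asset request in between; the rule IDs, the extension list, and the threshold are placeholders, and SecDataDir must be configured for the IP collection to persist:

    SecAction "id:1000,phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR}"

    # a request for a static asset resets the per-IP page counter
    SecRule REQUEST_FILENAME "@rx \.(css|js|png|gif|jpe?g|ico)$" \
        "id:1001,phase:1,nolog,pass,setvar:ip.pages=0"

    # any other request increments it; the counter expires after 10 minutes
    SecRule REQUEST_FILENAME "!@rx \.(css|js|png|gif|jpe?g|ico)$" \
        "id:1002,phase:1,nolog,pass,setvar:ip.pages=+1,expirevar:ip.pages=600"

    # deny an IP that has loaded many pages without ever touching an asset
    SecRule IP:PAGES "@gt 30" "id:1003,phase:1,deny,status:403,msg:'pages without assets'"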

Nginx basic auth and number of authenticated locations

I'm using basic auth in nginx with no issue there, but I would like to limit the number of distinct locations from which a user is authenticated.
The end goal is to prevent users from sharing their access credentials, since the website does real-time "monitoring" of some data. I want it so that if the same user/pass combination is used from another IP, either both users stop getting data,
or one of them stops getting data.
I don't think that is a good idea, because a user may log in via PC and mobile phone at the same time and have two different IP addresses that way. Also, HTTP auth isn't designed to do what you want it to. It would have to remember the IP address and make it expire somehow when the user leaves without logging out; altogether, it would be difficult to guess for how long the session should stay valid. Another problem is that most users don't have static IPs and get disconnected by their providers every 24 hours. What happens if that occurs after a valid login?
The most popular method to deal with this kind of problem is session cookies. These can be described as a one-time password, and you can use one for as long as you want or until it expires. Session IDs are usually saved in some kind of database, and making those sessions unique would not be a big deal, so this may be what you want. Luckily, the ngx_http_auth_request_module allows you to implement just this missing part, and brings you as close as you can get without developing your own nginx module (see https://www.nginx.com/resources/wiki/modules/ for available modules).
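A minimal sketch of the auth_request wiring (the /data/ location, the subrequest path, and the backend at 127.0.0.1:9000 are assumptions; the backend is the part you would have to write, returning 2xx to allow and 401/403 to deny):

    location /data/ {
        auth_request /_check_session;          # validated before every request
        # ... the protected content ...
    }

    location = /_check_session {
        internal;
        proxy_pass              http://127.0.0.1:9000/check;
        proxy_pass_request_body off;
        proxy_set_header        Content-Length "";
        proxy_set_header        X-Original-URI $request_uri;
    }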
On the other hand: don't do that. Seriously. If you care about security, do not try to reinvent the wheel; use something that is already proven. E.g. the ngx_http_auth_jwt_module allows the use of OpenID Connect, which also frees you from storing sensitive user data on your server (because nobody wants to store passwords unless it is absolutely necessary).
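For reference, the JWT variant is only a couple of directives (per the module's documentation; the realm string and key file path are placeholders, and this module ships with NGINX Plus rather than stock nginx):

    location /data/ {
        auth_jwt          "realtime data";     # realm sent on auth failure
        auth_jwt_key_file conf/keys.jwk;       # JSON Web Key set used to verify tokens
    }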
Both of these methods require nginx modules, which may not be installed on your server. If you don't have the permissions to build them, I would suggest adding that to your question, so that others can suggest solutions for non-root servers.
If you want to keep it simpler, you should also consider generating a fresh download link every time and saving the IP address and download link in a database. Delete the entry when the user downloads the file and you are done. For that to work you can use the
Content-Disposition: attachment; filename=FILENAME HTTP header, so that the browser doesn't save the file under a name like download.php.
Maybe you can also find some kind of JavaScript to replace ngx_http_auth_jwt_module and use OpenID with HTTP auth. That can work, because it is possible to do the authentication with AJAX as well.
Last but not least: if you still want to do HTTP auth, also use HTTPS, because this auth method does not encrypt your passwords by default.
What you want to do is unusual, so you will need to write a lot of the logic to handle the process yourself.
Your code will need to store a user ID and IP address pair for each logged-in user and validate each attempted login against it. As the previous answer pointed out, you will also need to expire logins. Basically, you need to roll your own session handler.
This is not impossible or particularly difficult, but you need to write it yourself in one of the scripting languages available to nginx: either Perl, which is not recommended due to its limited ecosystem in nginx, or Lua, which is highly recommended due to the massive nginx Lua ecosystem (used by Cloudflare, for instance).
You will need to compile in the third-party nginx Lua module and associated modules, or just uninstall nginx and use the OpenResty bundle instead, which already includes everything you will need ... including Redis for storage if you need to scale up. A minimal sketch follows the list below.
Here are some tools you can use as your building blocks
Openresty Session Library
Openresty Redis Session Library
Openresty Encrypted Session Module
Note that you can implement the OpenResty pieces directly in nginx if you wish, without having to run OpenResty itself, as it is just a convenient bundle of nginx and useful modules.
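To make the idea concrete, here is a minimal, untested sketch of the one-IP-per-user rule using nothing but a shared dict (the dict name, TTL, and location are placeholders; the session libraries above, or Redis, would replace the dict in a real deployment):

    # nginx.conf (OpenResty, or nginx with the Lua module compiled in)
    lua_shared_dict user_ips 10m;

    server {
        location /data {
            auth_basic           "monitoring";
            auth_basic_user_file conf/htpasswd;

            access_by_lua_block {
                local user = ngx.var.remote_user
                if not user then
                    return ngx.exit(ngx.HTTP_UNAUTHORIZED)
                end

                -- remember the first IP seen for this user for 30 minutes;
                -- add() fails with "exists" if the key is already present
                local ips = ngx.shared.user_ips
                local ok, err = ips:add(user, ngx.var.remote_addr, 1800)
                if not ok and err == "exists"
                   and ips:get(user) ~= ngx.var.remote_addr then
                    -- same credentials arriving from a second IP: cut it off
                    return ngx.exit(ngx.HTTP_FORBIDDEN)
                end
            }
        }
    }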

Apple Developer Connection log-in problems

Is there a trick to logging in to Apple Developer Connection? For the past two weeks, out of about 100 tries, I've been able to log in three times. Every other time, after a successful entry of my username and password, it takes me back to the login screen.
This happens to me on both my Macs, on Safari and Firefox, so I'm not hopeful of a solution. But I have a hard time believing that the situation is really this bad...
I am having the same problem, and I have narrowed it down to a problem with my ISP. Of course they will not acknowledge it, but the problem only arises when I attempt a login from home. I think they are probably using a caching proxy, and something in the scheme Apple uses to go from login to content makes the proxy believe it's only visiting content that is still valid. I am going slightly mad because of this.
This question and the related discussion clued me in to how to fix my problem with the same symptoms on developer.apple.com. In my case, I have multiple "teams," so after entering in my Apple ID, it takes me to a team selection page. After selecting a team, I'd just be redirected back to the login/Apple ID page.
Turns out, the login is done over HTTPS, while the team selection (and probably the bulk of other activities on developer.apple.com) happens over HTTP. Our firewall load-balances over a couple of Internet connections, and the HTTPS traffic was passing over a different interface than the HTTP traffic. Evidently, this was confusing Apple's authentication mechanism. It also explains why I was occasionally able to get through: sometimes all traffic would end up on the same interface.
Ultimately, my solution was to add a rule to the firewall to send all 17.0.0.0/8 traffic (Apple's legacy class A network) over the same interface.
Hopefully this helps someone else with a frustratingly endless login loop.

When should I use or avoid subdomains?

Recently a user told me to avoid subdomains when I can. I remember reading that Google considers a subdomain a unique site (is this true?). What else happens when I use a subdomain, and when should I or shouldn't I use one?
I heard cookies are not shared between subdomains? I know two images can be downloaded simultaneously from a site. Would I be able to download four if I use sub1.mysite.com and sub2.mysite.com?
What else should I know?
You can share cookies between subdomains, provided you set the right parameters in the cookie. By default, though, they won't be shared.
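For example (using the hostnames from the question), a cookie set like this is sent to mysite.com and every subdomain of it, while leaving the Domain attribute out makes the cookie host-only:

    Set-Cookie: session=abc123; Domain=mysite.com; Path=/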
Yes, you can get more simultaneous downloads if you serve images from different subdomains. On the other side of the scale, though, the user spends more time on DNS lookups, so it's not practical to have, say, 25 subdomains just to get 50 simultaneous downloads.
Another thing that happens with subdomains is that AJAX requests won't work without some effort (you CAN make them work using document.domain tricks, but it's far from straightforward).
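The document.domain trick amounts to one line, executed on the pages served from both subdomains so they opt in to the same origin (hostnames again from the question):

    // run on pages from both sub1.mysite.com and sub2.mysite.com
    document.domain = "mysite.com";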
I can't help with the SEO part, I'm afraid, although some people discourage having both yoursite.com and www.yoursite.com working and returning the same content, because it "dilutes your PageRank". Not sure how true that is.
You complicate quite a few things: collecting stats, controlling spiders, HTML5 storage, XSS, inter-frame communication, virtual-host setup, third-party ad serving, and interaction with remote APIs like Google Maps.
That's not to say these things can't be solved, just that the rise in complexity adds more work and may not provide suitable benefits to compensate.
I should add that I went down this path once myself for a classifieds site, adding domains like porsche.site.com and ferrari.site.com, hoping to boost rank for those keywords. In the end I did not see a noticeable improvement, and even worse, Google was walking the entire site via each subdomain, meaning that a search for Ferraris might return porsche.site.com/ferraris instead of ferrari.site.com/ferraris. In short, Google considered the subdomains to be duplicates of each other, but it still crawled each one every time it visited.
Again, workarounds existed but I chose simplicity and I don't regret it.
If you use subdomains to store your website's images, JavaScript, stylesheets, etc., then your pages may load quicker. Browsers limit the number of simultaneous connections to each domain name, so the more subdomains you use, the more connections can be made at the same time to fetch the page's content.
Recently a user told me to avoid subdomains when I can. I remember reading that Google considers a subdomain a unique site (is this true?). What else happens when I use a subdomain, and when should I or shouldn't I use one?
The last thing I heard about Google optimization is that domains count for more PageRank than subdomains. I also believe that PageRank calculations are per page, not per site (according to the algorithm, etc.), though the only people who can really tell you are Google employees.
I heard cookies are not shared between subdomains?
You should be able to use a cookie for all subdomains: a cookie set with Domain=mysite.com will be shared by www.mysite.com, sub1.mysite.com, and sub2.mysite.com alike, whereas a cookie set without the Domain attribute is host-only and cannot be shared with them.
I know two images can be downloaded simultaneously from a site. Would I be able to download four if I use sub1.mysite.com and sub2.mysite.com?
I'm not sure what you mean by downloading simultaneously. Browsers cap the number of simultaneous connections they will open to any one hostname, so images from a single domain are fetched only a few at a time; splitting assets across sub1.mysite.com and sub2.mysite.com lets the browser open more connections in parallel.

Is website content partitioning worthwhile?

In order to allow for multiple policies regarding content (security, cookies, sessions, etc.), I'm considering moving some content from my sites to their own domains, and was wondering what kind of dividends that would pay off (if any).
I understand cookies are domain-specific and are sent on every request (even for images), so if they grow too large they could start affecting performance; moving static content off in this way therefore makes sense (to me at least).
Since I expect that someone out there has already done something similar, I was wondering if you could provide some feedback on the pros and the cons.
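To illustrate the cookie-weight point (hostnames invented for the example), every image request to the main domain drags the cookies along, while a separate domain for static content avoids them entirely:

    GET /img/logo.png HTTP/1.1
    Host: www.myshop.com
    Cookie: session=8f2c...; prefs=...; analytics=...

    GET /img/logo.png HTTP/1.1
    Host: static.myshop-assets.com        (no Cookie header is sent)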
I don't know of any situation fitting your reasons that can't be controlled in the settings of the HTTP server, whether it be Apache, IIS, or whatever else you might be using.
I assume you mean you want to split them up into separate hosts, i.e. www1.domain.com and www2.domain.com. You are correct that cookies are host/domain-specific. However, there aren't really any gains if www1 and www2 are still the same computer. If you are experiencing load issues and split the content between two different servers, there could be some gains there.
If you actually mean different domains (www.domain1.com and www.domain2.com), I'm not sure what kind of benefits you would be looking for...