I want to protect the PHP code in my product. I have heard that ionCube is used to secure code by encoding it. How do I encode my PHP files with ionCube, and how is the encoded output then run in PHP?
No, MD5 is not reversible. It can only be cracked through brute-force or dictionary-based cryptographic attacks. There are tools that can be used to perform such attacks for known password hashes, but unless you have some serious computer resources at your disposal you would have to wait for a very long time.
Test every possible password individually for each user. Eventually you will find a match for all of them.
You will need to spend hundreds of thousands of dollars on hardware and millions of dollars on your electricity bill, and it will take a very long time.
WordPress deliberately made this very difficult to do.
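For concreteness, here is a minimal sketch of such a dictionary attack against unsalted MD5 hashes (the hash dump and wordlist file name are placeholders; WordPress's actual phpass hashes are salted and iterated, which multiplies the per-guess cost):

```python
import hashlib

# Hypothetical input: a dump of unsalted MD5 password hashes, keyed by user.
stolen_hashes = {"alice": "5f4dcc3b5aa765d61d8327deb882cf99"}  # md5("password")

def dictionary_attack(hashes, wordlist_path):
    """Hash every candidate word once and test it against every stolen hash."""
    cracked = {}
    with open(wordlist_path, encoding="utf-8", errors="ignore") as wordlist:
        for line in wordlist:
            candidate = line.strip()
            digest = hashlib.md5(candidate.encode()).hexdigest()
            for user, stored in hashes.items():
                if digest == stored:
                    cracked[user] = candidate
    return cracked

print(dictionary_attack(stolen_hashes, "rockyou.txt"))  # wordlist name is a placeholder
```

A salt forces the attacker to redo this work for each user, and key stretching multiplies the cost of every guess; that is the "very long time" referred to above.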
I'm hoping to automate the downloading and installation of the free GeoIP databases, and I want to know whether there are any additional verification options available, given that MD5 is becoming more susceptible to pre-image attacks.
Additionally, the MD5 sums are stored on the same server, meaning any attacker breaking into that server would be able to upload a potentially malicious database and have it served without any client being the wiser.
GPG is a common verification tool, and it would be trivial to set up for most Linux users, given that their package managers already perform this sort of verification.
maxmind.com supports HTTPS (TLS/SSL) on its download links (just add the 's' yourself), so keep your certificates accurate and your TLS libraries up to date and you should be as secure as is possible.
Even assuming their web server gets hijacked, there's really no point in fretting about MD5 vs. SHA vs. GPG at that point, as you would have no reasonable assurances or concept of the width and breadth of the attack. It might as well be an inside job intentionally perpetrated by the company itself. MaxMind makes no fitness guarantees against human or automated error anyway, so take it under advisement.
For a free service (free database, free bandwidth, huge weekly updates) you can't exactly go begging for air-gapped, Fort Knox-grade security. TLS is already better than you'll need.
You are welcome to perform your own sanity-checking of a newly downloaded database against the previously downloaded database, to make sure any changes or corrections are nominally insignificant. Better still, you can use their GeoIP Update program or direct-download patches. This way, you are only downloading nominally insignificant updates to begin with, and can inspect them yourself before merging them into the database. And you'll be saving bandwidth for everyone.
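For reference, the baseline the question describes — fetching the database and its published MD5 sum over HTTPS and comparing the two — might look like this sketch (both URLs are placeholders, not MaxMind's real paths). As noted above, when the sum lives on the same server this only catches corruption in transit, not a compromise; TLS is doing the real work:

```python
import hashlib
import urllib.request

# Placeholder URLs: substitute the actual database and checksum locations.
DB_URL = "https://example.com/GeoLite2-City.mmdb.gz"
MD5_URL = "https://example.com/GeoLite2-City.mmdb.gz.md5"

def download_and_verify(db_url, md5_url, dest):
    """Fetch the database over TLS and compare it against the published MD5 sum."""
    # Checksum files are often "<hash>  <filename>"; take the first token.
    expected = urllib.request.urlopen(md5_url).read().decode().strip().split()[0]
    data = urllib.request.urlopen(db_url).read()
    actual = hashlib.md5(data).hexdigest()
    if actual != expected:
        raise ValueError(f"checksum mismatch: {actual} != {expected}")
    with open(dest, "wb") as f:
        f.write(data)

download_and_verify(DB_URL, MD5_URL, "GeoLite2-City.mmdb.gz")
```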
So I have been working on a client/server application written in Java. At the moment I am looking for a way to verify that the code of the client application has not been changed and recompiled. I've been searching Google for some time without much success. An educated guess would be to generate a hash value of the client's code at runtime, send it to the server, and compare it with a database entry or a variable. However, I am not sure if that is the right way, or even how to generate a hash of the codebase during execution in a secure way. Any suggestions would be greatly appreciated.
What would stop the nefarious user from simply having the client send the correct checksum to the server each time? Nothing.
There is currently no way to completely ensure that the software running on a client computer has not been altered. It's simply not possible to trust client software without asserting control over the client's hardware and software. This is unfortunately a situation where you should focus on software features and quality, which benefits all users, rather than on preventing a few users from hacking your software.
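To make the spoofing point concrete, here is a hedged sketch (in Python rather than Java, purely for brevity) of the naive scheme and why it fails — a tampered client never has to run the hash at all:

```python
import hashlib
import sys

def self_checksum():
    """The naive 'integrity check': hash this program's own file on disk."""
    with open(sys.argv[0], "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# An honest client sends self_checksum() to the server.
# A tampered client simply replies with the digest of the ORIGINAL,
# unmodified program, which the attacker computes once and hardcodes:
KNOWN_GOOD_DIGEST = "..."  # hypothetical digest of the pristine client

def checksum_from_tampered_client():
    return KNOWN_GOOD_DIGEST  # the server cannot tell the difference
```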
I have a system that requires a large number of names and email addresses (two fields only) to be imported via CSV upload.
I can deal with the upload easily enough, but how would I validate the email addresses before I process the import?
Also, how could I process this quickly, or as a background process, without requiring the user to watch a script churning away?
Using Classic ASP / SQL Server 2008.
Please, no jibes at the Classic ASP.
Do you need to do this upload via the ASP application? If not, whatever scripting language you feel most comfortable with, and can do this in with the least coding time, is the best tool for the job. If you need users to be able to upload into the Classic ASP app and have a reliable process that inserts the valid records into the database and rejects the invalid ones, your options change.
Do you need to provide feedback to the users? Like telling them exactly which rows were invalid?
If that second scenario is what you're dealing with, I would have the ASP app simply store the file, and have another process (a .NET service, a scheduled task, or something similar) do the importing and report on its progress in a text file which the ASP app can check. That brings you back to doing it in whatever scripting language you are comfortable with, and you don't have to deal with the HTTP request timing out.
If you Google "regex valid email" you can find a variety of regular expressions for identifying invalid email addresses.
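As a rough illustration of that validation pass (a sketch, not production code: the pattern below is deliberately simple, since no regex fully captures RFC 5322, and the two-column layout is assumed from the question):

```python
import csv
import re

# Deliberately simple pattern; it rejects obvious garbage rather than
# attempting full RFC 5322 validation.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def split_rows(csv_path):
    """Partition (name, email) CSV rows into valid and invalid lists."""
    valid, invalid = [], []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row_num, row in enumerate(csv.reader(f), start=1):
            if len(row) == 2 and EMAIL_RE.match(row[1].strip()):
                valid.append(row)
            else:
                invalid.append((row_num, row))  # keep row numbers for feedback
    return valid, invalid
```

Keeping the row numbers with the rejects covers the "tell them exactly which rows were invalid" feedback mentioned above.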
In a former life, I used to do this sort of thing by dragging the file into a working table using DTS and then working that over using batches of SQL commands. Today, you'd use Integration Services.
This allows you to get the data into SQL Server very quickly and prevents the script timing out; then you can use whatever method you prefer (e.g. AJAX-driven batches, redirection-driven batches, etc.) to work over discrete chunks of the data, or schedule it to run as a single batch (a SQL Server job) and just report on the results.
You might be lucky enough to get your 500K rows processed in a single batch by your upload script, but I wouldn't chance it.
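As a script-level sketch of the same staging-table idea (this is not SSIS itself; the connection string and staging_contacts table are hypothetical, and the pyodbc library is assumed):

```python
import pyodbc

# Hypothetical connection string and staging table.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=imports;"
    "Trusted_Connection=yes"
)
cursor = conn.cursor()
cursor.fast_executemany = True  # send parameter batches to the server in bulk

def stage_rows(rows, batch_size=1000):
    """Insert (name, email) pairs into the staging table in batches."""
    for i in range(0, len(rows), batch_size):
        cursor.executemany(
            "INSERT INTO staging_contacts (name, email) VALUES (?, ?)",
            rows[i:i + batch_size],
        )
        conn.commit()  # commit per batch so no single transaction balloons
```

Validation and de-duplication can then run as set-based SQL against the staging table, entirely outside the upload request.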
I am open to suggestions on the following:
have a file on S3
this file will be randomly downloaded by random people
the volume of downloads is low: maybe 200-300 per day at most during a spike, but usually as low as 5-10.
file size is ~10-20 MB.
I somehow need to count (a) how many accesses to the file happened and (b) how many full (completed) downloads happened.
I believe the only good way is to just have some Ruby or Node.js script: it'll count accesses, then somehow supply the file, and on the final byte record the completed count.
Unfortunately, that doesn't seem like too nice of an approach.
Any better ideas?
I was also thinking about enabling access logging on S3 and then parsing the logs, but that doesn't seem too good either, as it requires downloading and parsing the logs.
I would stick with your first idea: having some sort of server-side logic handle the counting.
I don't know which type of clients are accessing your system, but with this approach you can parse additional data coming from your clients, like HTTP headers (if applicable), and that can help you identify the profile of your clients. That might not be useful to you at all, though.
Also, if you ever need to add more complicated logic (authentication, privileges, permissions, uploading files, etc), it will be much easier once you already have a backend application/script in place.
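A minimal sketch of that server-side counter, assuming Flask and a locally mirrored copy of the S3 object (the file name and routes are placeholders; in practice the counters would live in a database rather than in process memory):

```python
from flask import Flask, Response

app = Flask(__name__)
FILE_PATH = "payload.bin"  # placeholder: the S3 object mirrored locally
stats = {"accesses": 0, "completed": 0}

@app.route("/download")
def download():
    stats["accesses"] += 1  # count every access up front

    def stream():
        with open(FILE_PATH, "rb") as f:
            while chunk := f.read(64 * 1024):
                yield chunk
        # Execution only reaches this line if the client consumed
        # the whole body, so count a completed download here.
        stats["completed"] += 1

    return Response(stream(), mimetype="application/octet-stream")

@app.route("/stats")
def show_stats():
    return stats  # Flask serializes dicts to JSON

if __name__ == "__main__":
    app.run()
```

The "completed" counter works because the generator only runs past the final chunk if the client actually consumed the whole body; a client that disconnects mid-download never triggers it.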
Would it be useful to a hacker in any way if I publicly displayed current server stats, such as average load times and memory usage?
The only issue I can foresee is that someone attempting to DDoS the server would have a visible indication of success, or would be able to examine patterns to choose an optimal time to attack. Is this much of an issue if I'm confident in the host's anti-DDoS setup? Are there any other problems I'm not seeing? (I have a bad tendency to miss wide-open security holes sometimes...)
Also useful for doing a MITM attack at the busiest time.
So the attacker can acquire the most targets before possible detection.
Another thing I can think of is log-file 'obfuscation', where requests by an attacker get lost among the other logged traffic.
Maybe a long shot, but it can also be used to see where your visitors are coming from (based on the time they access the website), which can be used to target your visitors in other ways.
Also, to expand on the possibility of attackers DoSing the site: they can calculate the average response time at different times of the day (when it isn't published automatically), because they can put load on the server themselves and see when the load gets lighter.
Yes, it's useful.
It will help him know when he can download a big chunk of data, like a backup, without being noticed in the traffic statistics ;)
He will also know when he can attack, run a penetration test, brute-force, or whatever, with a better chance of hiding his tracks in the logs.
Furthermore, if he gains access he will know when he can collect the most credit cards and passwords from users, for instance if he had no luck with the database or it's an XSS attack.
DDoS is another point, though you mentioned it already: memory and average load will give him the success status of the attack.