Is there any way to control BitLocker's resource usage during encryption?

I have some computers with BitLocker enabled, but while their drives are encrypting they become slow to the point of being unusable.
Is there a way to make BitLocker use less RAM/resources so that I can keep working while it encrypts?
Thanks.

Related

Verifying GeoIP databases come from GeoIP

I'm hoping to automate the downloading and installation of the free GeoIP databases, and I want to know whether there are any additional verification options available, given that MD5 is becoming more susceptible to pre-image attacks.
Additionally, the MD5 sums are stored on the same server, meaning any attacker who breaks into that server can upload a potentially malicious database and have it served without any client being the wiser.
GPG is a common verification tool, and it would be trivial to set up for most Linux users given their package managers already perform this sort of verification.
maxmind.com supports HTTPS (TLS/SSL) on its download links (just add the 's' yourself), so keep your certificate store accurate and your TLS libraries up to date and you should be about as secure as is possible.
Even assuming their webserver gets hijacked, there's really no point in fretting about MD5 vs SHA vs GPG at that point as you would have no reasonable assurances or concept of the width and breadth of the attack. It might as well be an inside job intentionally perpetrated by the company themselves. maxmind makes no fitness guarantees against human or automated error, anyway, so take it under advisement.
For a free service (free database, free bandwidth, huge weekly updates) you can't exactly go begging for air-gapped, Fort Knox-grade security. TLS is already better than you'll need.
You are welcome to perform your own sanity-checking of a newly downloaded database against the previously downloaded database, to make sure any changes or corrections are nominally insignificant. Better still, you can use their GeoIP Update program or direct-download patches. This way, you are only downloading nominally insignificant updates to begin with, and can inspect them yourself before merging them into the database. And you'll be saving bandwidth for everyone.
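If you script the download, a minimal Java sketch of the "HTTPS plus checksum" idea could look like the following. The URLs and the .md5 sidecar path are assumptions here -- check MaxMind's site for the current paths, and confirm whether the published checksum covers the compressed or the unpacked file.

    import java.io.InputStream;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;
    import java.security.DigestInputStream;
    import java.security.MessageDigest;

    public class GeoIpFetch {

        // Downloads a URL to a local file and returns the path.
        static Path download(String url, String dest) throws Exception {
            try (InputStream in = new URL(url).openStream()) {
                Path out = Paths.get(dest);
                Files.copy(in, out, StandardCopyOption.REPLACE_EXISTING);
                return out;
            }
        }

        // Hex-encoded MD5 of a file, computed while streaming it from disk.
        static String md5(Path file) throws Exception {
            MessageDigest md = MessageDigest.getInstance("MD5");
            try (InputStream in = new DigestInputStream(Files.newInputStream(file), md)) {
                byte[] buf = new byte[8192];
                while (in.read(buf) != -1) { /* digest is updated as we read */ }
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical paths -- the point is simply: fetch both files over HTTPS.
            String dbUrl  = "https://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz";
            String md5Url = dbUrl + ".md5";

            Path db = download(dbUrl, "GeoLiteCity.dat.gz");
            String published = new String(Files.readAllBytes(download(md5Url, "GeoLiteCity.dat.gz.md5")));
            // Some checksum files are just the hash, others are "hash  filename".
            String expected = published.trim().split("\\s+")[0];

            if (!md5(db).equalsIgnoreCase(expected)) {
                throw new IllegalStateException("MD5 mismatch -- refusing to install this database");
            }
            System.out.println("Checksum OK: " + expected);
        }
    }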

duply/duplicity -- where should I save profile data?

I am trying to set up a backup to Amazon S3 servers using duply, which is a front-end for duplicity.
When I create a duply profile, this message is returned:
IMPORTANT:
Copy the _whole_ profile folder after the first backup to a safe place.
It contains everything needed to restore your backups. You will need
it if you have to restore the backup from another system (e.g. after a
system crash). Keep access to these files restricted as they contain
_all_ informations (gpg data, ftp data) to access and modify your backups.
Repeat this step after _all_ configuration changes. Some configuration
options are crucial for restoration.
What is a reasonable way to go about doing this?
My purpose for setting up an encrypted off-site backup is that I don't want to lose all my data if there is physical damage (fire, etc.) to my home.
So, saving this information on a thumb drive doesn't seem like a good idea, since the thumb drive would also be destroyed in such an event.
Saving this information on the Amazon S3 server itself seems like it would completely compromise the encryption.
If not these two options, where does one save it?
How about tar'ing the profile folder, encrypting it with gpg (using a long symmetric passphrase, or encrypting to your personal key), and saving it off-site?
Of course, you can use anything else that can securely encrypt archives/files.
..ede/duply.net
PS: never use thumb drives/flash-based memory for archival purposes. When not connected to power regularly, they lose memory cell content because it is not refreshed.
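If you would rather script the tar-and-gpg step than type it by hand, here is a minimal sketch. It assumes a hypothetical profile at ~/.duply/myprofile and that tar and gpg are on the PATH; adjust names and staging paths to your setup.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class ProfileBackup {

        // Runs an external command, echoing its output, and fails if it exits non-zero.
        static void run(String... cmd) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException("command failed: " + String.join(" ", cmd));
            }
        }

        public static void main(String[] args) throws Exception {
            // Assumed locations -- change to your own profile name and staging directory.
            String profile = System.getProperty("user.home") + "/.duply/myprofile";
            String tarball = "/tmp/duply-profile.tar.gz";

            // 1) Archive the whole profile folder.
            run("tar", "czf", tarball, profile);

            // 2) Encrypt it symmetrically; gpg will prompt for a (long!) passphrase.
            //    Alternatively, use "--encrypt --recipient <your key id>" for key-based encryption.
            run("gpg", "--symmetric", "--cipher-algo", "AES256",
                "--output", tarball + ".gpg", tarball);

            // 3) Remove the plaintext archive; only the .gpg file should leave the machine.
            Files.deleteIfExists(Paths.get(tarball));

            System.out.println("Copy " + tarball + ".gpg to your off-site location.");
        }
    }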
Save it on several flash drives and put one in a bank safe-deposit box; they are not expensive.

authenticating application codebase

So I have been working on a client/server application written in Java. At the moment I am looking for a way to verify that the code of the client application has not been changed and then recompiled. I've been searching Google for some time without much success. An educated guess would be to generate a hash value of the client's code at runtime, send it to the server, and compare it with a database entry or a variable. However, I am not sure whether that is the right way, or even how to generate a hash of the codebase during execution in a secure way. Any suggestions would be greatly appreciated.
What would stop the nefarious user from simply having the client send the correct checksum to the server each time? Nothing.
There is currently no way to completely ensure that the software running on a client computer has not been altered. It's simply not possible to trust client software without asserting control over the client's hardware and software. Unfortunately, this is a situation where you should focus on software features and quality, something that benefits all users, rather than on preventing a few users from hacking your software.
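For reference, a minimal sketch of the self-hashing idea from the question -- hash the JAR the class was loaded from and report it to the server -- with the caveat, as noted above, that an altered client can simply report the hash of the original JAR:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    public class SelfCheck {

        // SHA-256 of the JAR this class was loaded from, as a hex string.
        // Note: this only works when the class actually runs from a JAR; from an
        // IDE/classes directory, getLocation() points at a directory instead.
        static String codebaseSha256() throws Exception {
            Path jar = Paths.get(SelfCheck.class.getProtectionDomain()
                                                .getCodeSource()
                                                .getLocation()
                                                .toURI());
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            try (InputStream in = Files.newInputStream(jar)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    md.update(buf, 0, n);
                }
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) hex.append(String.format("%02x", b));
            return hex.toString();
        }

        public static void main(String[] args) throws Exception {
            // The client would send this value to the server for comparison --
            // but nothing stops an altered client from sending the "expected" value.
            System.out.println(codebaseSha256());
        }
    }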

How can I handle 200K requests per sec in WCF

I need to design a system that can handle 200K requests per second on each machine over HTTP.
The WCF service needs to be hosted in a Windows service.
I wonder whether WCF can handle such a requirement?
What is the best system setup/configuration?
The machine itself is pretty powerful: 32 GB RAM and 8 cores (or more), and it can be upgraded if needed.
Can I handle that volume of requests on a single machine with WCF over HTTP?
Doing this on a single machine is likely to be pretty tough (if indeed it's possible). It would be better to make your system scale horizontally, so you can add lots of machines as required. How you do that will depend on what your system actually needs to do. If it's some simple calculation which requires no persisted state, it shouldn't be too hard. If you've got some interaction with storage of some form which really needs to be read/written on each request, it'll be a lot harder - and choosing your persistence technology is likely to be pretty key to making it all hang together.
Note that there are other benefits to scaling horizontally too - in particular, the ability to upgrade the system without any downtime (if you're careful) and removing a huge single point of failure.
You need to give some more info on this.
Do you get the request and have to process it immediately?
Can you store the request data and delegate the processing to some other thread/process? Is there any way to scale the system out instead of up?
Is this in fact the only piece of infrastructure you can deploy stuff to?
I would start by asking what it is that you want to do during request handling, and then what the bottlenecks are going to be.

Leaving SQL Management open on the internet

I am a developer, but every so often need access to our production database -- yeah, poor practice, but anyway... My boss doesn't want me directly on the box using RDP, so we decided to just permit MS SQL Management Console access so that I can do my tasks. So right now we have the SQL box somewhat accessible on the internet (on port 1433, if I am not mistaken), which opens a security hole. But I am wondering: how uncommon a practice is this, and what defaults should I be concerned about? We use MSSQL2008, and I created an account that has read-only access, because my production tasks only need that. I didn't see any unusual default accounts with default passwords on the system, so I would be interested to hear your take. (And of course, is there a better way?)
Exposing a database or RDP directly to the Internet, even if locked down, is akin to putting up a sign saying "do not enter" - the security provided is not significant (and more importantly, could disappear tomorrow when an exploit is discovered).
A VPN is akin to actually locking the door - although security holes are sometimes discovered in VPN software, they are much rarer, as security is a primary concern there (as opposed to e.g. database servers, where it's mostly an afterthought). As for stability, I've never encountered this problem with a VPN server under such a small load (occasional access by a few users).
Bottom line: Unless you need to expose it to everyone (e.g. a web server), don't put it directly on the Internet.
BTW, are you sure your database server has not been hacked? In my experience, "has not been hacked" usually means "didn't notice it", or at best "not hacked yet" - either way, that's a far cry from "reasonably secure".