Understanding cryptocurrency mining profit calculation - Bitcoin

So I've been reading about this topic for a while and am still unable to figure out some things (or at least I need some confirmation). I looked online for mining hardware and chose one to make my calculations. Here is my example:
1- hardware: Antminer S5 (1155 GH/s)
2- running costs (electricity (power and cooling), internet, space) are zero for now.
3- using this calculator https://www.cryptocompare.com/mining/calculator/ltc for Litecoin
Inputting the above gives a revenue of $17,011,841.74 / year!
Can someone explain whether this is true, or what I did wrong?

The problem is that the miner you are looking at (the Antminer S5) is an ASIC specifically designed for SHA-256 mining. Litecoin uses Scrypt, which requires a different type of miner.
You cannot mine Litecoin with an Antminer S5. If you mined Bitcoin instead, you'd be looking at $22.24 / month, or $272.52 / year, running your miner 24/7 with zero electricity costs.
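As a sanity check, expected mining revenue is just your share of the total network hash rate multiplied by the block subsidy. A minimal sketch, where the network hash rate, block reward and BTC price are illustrative assumptions rather than current values:

```python
# Rough sanity check of mining revenue.
# All network figures below are assumed, illustrative values.
miner_hashrate = 1.155e12    # Antminer S5: ~1155 GH/s, in H/s
network_hashrate = 2.0e18    # assumed total Bitcoin network hash rate, H/s
block_reward = 12.5          # BTC per block (assumed subsidy era)
blocks_per_day = 144         # one block roughly every 10 minutes
btc_price_usd = 650.0        # assumed exchange rate

# Your expected share of all blocks is proportional to your hash power.
share = miner_hashrate / network_hashrate
btc_per_day = share * blocks_per_day * block_reward
usd_per_month = btc_per_day * btc_price_usd * 30

print(f"BTC/day:   {btc_per_day:.8f}")
print(f"USD/month: {usd_per_month:.2f}")
```

With these assumed inputs the result lands in the low tens of dollars per month, i.e. the same order of magnitude as the answer's figure, and nowhere near millions per year.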

Related

Clarification in the Ethereum White Paper

I was going through the Ethereum White Paper, and it mentions that the scripting language implemented in the Bitcoin blockchain has the limitations of value-blindness and blockchain-blindness (points 2 and 4 in the paper). I am finding it hard to comprehend what this means. It would be great if someone could help me understand this with an example.
Value blindness:
There is no way for a UTXO script to provide fine-grained control over the amount that can be withdrawn. For example, one powerful use case of an oracle contract would be a hedging contract, where A and B put in $1000 worth of BTC and after 30 days the script sends $1000 worth of BTC to A and the rest to B. This would require an oracle to determine the value of 1 BTC in USD [Note 3], but even then it is a massive improvement in terms of trust and infrastructure requirements over the fully centralized solutions that are available now. However, because UTXO are all-or-nothing, the only way to achieve this is through the very inefficient hack of having many UTXO of varying denominations (e.g. one UTXO of 2^k for every k up to 30) and having O pick which UTXO to send to A and which to B.
Blockchain blindness:
UTXO are blind to certain blockchain data such as the nonce and the previous block hash. This severely limits applications in gambling, and several other categories, by depriving the scripting language of a potentially valuable source of randomness.
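The denominations hack the quoted passage describes can be sketched in plain Python: with one pre-created output of 2^k units for each k, any exact amount can be paid by selecting the subset of outputs matching its binary representation. This is a toy illustration of the idea only, not actual Bitcoin script:

```python
# Sketch of the "many UTXO of varying denominations" workaround the paper
# calls inefficient: pre-create outputs of 2**k units each, then pay any
# exact amount by picking the outputs in its binary representation.
def split_payment(amount, max_k=30):
    """Return the list of 2**k denominations that sum to `amount`."""
    picked = [2**k for k in range(max_k + 1) if amount & (1 << k)]
    assert sum(picked) == amount
    return picked

# The oracle O would send these outputs to A and the remainder to B.
print(split_payment(1000))   # -> [8, 32, 64, 128, 256, 512]
```

The inefficiency is visible even here: the workaround needs 31 pre-made outputs per party just to approximate what a value-aware script could express in one line.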

How to secure new public proof of work blockchain?

I am fairly new to Bitcoin and blockchain technology and have recently started reading about it, so my understanding and the question below may not be very accurate.
As I understand it so far, proof of work is the basic building block of the Bitcoin blockchain; because of it, an attacker would have to control more than 50% of the total compute (hash) power
in order to manipulate the blockchain by consistently producing the longest chain.
Now, the Bitcoin guys were a bit lucky: they were first, and nobody paid attention in the early days. Once Bitcoin gathered momentum, honest hash power became predominant and the system became inherently secure.
But now, how can someone safely start a new public blockchain (for a completely different application)? If a new blockchain is floated with only a few mining nodes,
an attacker can come along with more compute power and hijack it, since there are so few honest nodes.
It depends on what you want to do. There are many implementations of blockchain, each with its own objective. Bitcoin was the first implementation; it is a cryptocurrency, and many other cryptocurrencies have been developed like it.
However, blockchain technology is useful for many other things as well, for example controlling votes in a distributed way during elections. Because of that, there are many blockchain implementations:
Hyperledger Fabric is a private blockchain, where access must be controlled.
Ethereum is a public blockchain for transferring assets. Anyone can create their own tokens and start using them through the Ethereum network. So you would be using an existing chain, and attackers couldn't hijack you. I think that would be a great start. If I were you, I'd continue reading about this.
There are a couple of ways to avoid the attack you are describing (a 51% attack, where existing miners hijack a new network):
Merge Mining
The smaller chain includes block data in the larger chain (e.g. Bitcoin) so the blocks are mined with the hashpower of the larger network.
Change the hashing algorithm
For Bitcoin, the hashing algorithm is two rounds of SHA-256. Because there is so much SHA-256 mining power out there, a small SHA-256 chain can be attacked fairly easily: Bitcoin miners can simply point their existing hardware at the small network long enough to execute an attack, and then switch back. This happened to Bitcoin Gold recently. So, use something other than SHA-256, or any algorithm for which a lot of hashing hardware already exists.
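The scale mismatch behind this advice is easy to quantify: compare the new chain's hash rate to the hash power already deployed on the large network. Both figures below are assumed, illustrative values, not measurements:

```python
# Back-of-the-envelope check of why sharing SHA-256 with Bitcoin is risky
# for a small chain. Both hash rates are assumed, illustrative values.
bitcoin_hashrate = 3.0e19      # assumed total deployed SHA-256 hash power, H/s
small_chain_hashrate = 1.0e15  # assumed hash rate of a new SHA-256 chain, H/s

# Fraction of Bitcoin's miners that must defect to out-mine the small chain:
fraction_needed = small_chain_hashrate / bitcoin_hashrate
print(f"{fraction_needed:.6%} of the big network's hash power"
      " suffices for a 51% attack")
```

Under these assumptions, a tiny fraction of a percent of the incumbent network's hardware out-hashes the entire new chain, which is why choosing an algorithm without an existing hardware base (or merge mining) matters.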

Machine Learning challenge: technique for collect the coins

Suppose a company owns a number of vending machines that collect coins. When the coin safe is full, the machine cannot sell any new items. To prevent this, the company must collect the coins before that happens. But if the company sends the technician too early, it loses money on an unnecessary trip. The challenge is to predict the right time to collect the coins so as to minimize the cost of operation.
At each visit (to collect or other operations), a reading of the level of coins in the safe is performed. This data contains historical information regarding the safe filling for each machine.
What is the best ML technique or approach to this problem, computationally?
These are the two parts of the problem as I see them:
1) vending machine model
I would probably build a model for each machine using the historical data. Since you said a linear approach is probably not good, think about the factors that influence how a machine fills up: time-related effects such as day-of-week and holiday dependence, and perhaps other influences like the weather. Attach these factors to the historical data to build a good predictive model. Many machine learning techniques can help create a model and find real correlations in the data. You could create descriptors from your historical data and try to correlate them with the filling state of a machine; PLS can help reduce the descriptor space and find the relevant ones. Neural networks are great if you really have no clue about the underlying math of a correlation. Play around with it, but pretty much any machine learning technique should be able to come up with a decent model.
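To make part 1 concrete, here is a minimal sketch of fitting a day-of-week fill model with ordinary least squares. The visit readings are synthetic (the weekday rates and noise level are invented for illustration); real data would come from the technician's visit logs:

```python
import numpy as np

# Minimal sketch: estimate daily coin intake per weekday with least squares
# on synthetic readings. The "true" weekday pattern below is an assumption.
rng = np.random.default_rng(0)
days = np.arange(200)
weekday = days % 7
true_rate = np.array([30, 32, 31, 35, 50, 80, 20])   # assumed weekday pattern
intake = true_rate[weekday] + rng.normal(0, 3, size=days.size)

# One-hot encode the weekday and solve X @ beta ~= intake.
X = np.eye(7)[weekday]
beta, *_ = np.linalg.lstsq(X, intake, rcond=None)
print("estimated intake per weekday:", beta.round(1))
```

The recovered coefficients land close to the assumed weekday rates; with real data you would add more descriptors (holidays, weather, location) as extra columns of `X`, or switch to a nonlinear model if the residuals warrant it.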
2) money collection
Model the cost of a technician's trip to a machine, taking into account the filling grade of the machines and the cost of the trip. You can send the technician on virtual collecting tours and calculate the total cost of collecting the money against the revenue from the machines. Again, maybe use a neural network with an evolutionary strategy to find an optimal set of trips and times. You can reuse the filling-grade model from part 1 during this virtual optimization, since you will need to estimate the filling grade of the machines on these virtual collection rounds.
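A toy version of such a virtual collecting tour for a single machine: simulate a noisy fill rate and a technician who arrives one to three days after being called, then compare the total yearly cost across collection thresholds. All rates, capacities and costs here are assumed, illustrative numbers:

```python
import random

# Toy "virtual collection tour": one machine, noisy daily fill, technician
# arrives 1-3 days after being called. All numbers are assumed values.
def total_cost(threshold, days=365, capacity=1000,
               trip_cost=25.0, lost_sales_per_day=60.0, seed=0):
    rng = random.Random(seed)
    level, cost, arrival = 0, 0.0, None
    for day in range(days):
        if arrival is not None and day >= arrival:
            cost += trip_cost            # technician empties the safe
            level, arrival = 0, None
        if level >= capacity:
            cost += lost_sales_per_day   # safe full: machine cannot sell
        else:
            level += rng.randint(20, 80) # noisy daily coin intake
        if level >= threshold and arrival is None:
            arrival = day + rng.randint(1, 3)   # call the technician
    return cost

# Sweep thresholds: low ones waste trips, high ones risk a full safe.
best = min(range(100, 1001, 100), key=total_cost)
print("cheapest collection threshold:", best)
```

A real optimizer would replace the fixed fill distribution with the predictive model from part 1 and optimize routes over many machines at once, but the cost trade-off it searches over is exactly this one.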
Interesting problems you have...

Time to crack DES? Is it a task suitable for a script kiddie yet?

Given that AES is the encryption method of choice, should existing code that uses DES be rewritten if the likely threat is at the script-kiddie level? (E.g. PKZIP passwords can be cracked with free utilities by non-professionals; is DES like that?) A quick Google search seems to imply that even deprecated DES still requires a supercomputer and a large amount of time. Or have times changed?
In particular, this CAPTCHA library uses DES to encrypt the challenge string which is sent to the user in viewstate.
DES is broken as far as storing sensitive data is concerned, so I would certainly not use it in anything new, and I would replace it in anything used for long-term storage of information of interest (data that someone would have a profit or national-security interest in stealing).
At the moment a DES message can be broken by brute force in a couple of days (or less) using under $100,000 worth of custom hardware.
But there are some key factors in that:
The hardware is custom: the chips used to quickly brute-force a DES key are not the general-purpose processors you'd find in a PC. That said, there is probably room today for using a cluster of PlayStation 3s or current-generation graphics cards (GPGPU) to crack a DES message in a reasonable amount of time, perhaps bringing the cost down to maybe $15,000.
The other factor is time: a DES message can be cracked in a day, but if your CAPTCHA library includes a timestamp that enforces a 30-minute timeout for any given CAPTCHA response, it would still be effective (you could scale up your hardware, but then you're talking millions).
Overall I'd say that for non-long term storage, DES is still secure against "script kiddies".
No, DES cracking is not suitable for script kiddies and probably won't be in the foreseeable future.
It requires enormous processing power; we're talking about a load of FPGAs.
For example, in the CHES 2006 secret-key challenge, the COPACOBANA machine took 21 hours, 26 minutes and 29 seconds using 108 of its 128 FPGAs, at a throughput of 43.1852 billion keys per second, and found the key after searching through 4.73507% of the keyspace.
Now, if we apply Moore's law, a similar machine built today would take about a quarter of the time for the same amount of money, or a quarter of the money for the same amount of time.
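Plugging the quoted COPACOBANA throughput into the size of the DES keyspace reproduces the often-cited search times:

```python
# Quick check of the COPACOBANA figures quoted above: at ~43.2 billion
# keys/second, how long does an exhaustive (and an average) DES search take?
keys_per_second = 43.1852e9
keyspace = 2 ** 56              # DES has a 56-bit key

full_search_days = keyspace / keys_per_second / 86400
average_days = full_search_days / 2   # on average, the key is found halfway

print(f"exhaustive search: {full_search_days:.1f} days")
print(f"average search:    {average_days:.1f} days")
```

This gives roughly 19 days for the full keyspace and around 9-10 days on average, which is consistent with the ~9-day figure other answers cite and with the challenge key being found after about 4.7% of the keyspace in under a day.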
DES is broken by the standards of the crypto community, but the time required to break it is generally large enough that it would be 'safe' to use for this kind of application, on one assumption: the DES key changes from session to session. If the key doesn't change, then it is open to attack by a very, very dedicated individual. Now, the question is: is your website subjected to people who will spend 10+ days cracking DES, rather than applying the lessons learned by the rest of the spam industry in the way of image recognition?
DES is probably still good enough for most use cases. But the point is, there is normally no reason to use an algorithm (or in this case rather: a key strength) that is known to be rather weak.
Wikipedia points out that even with special hardware, around 9 days are needed for an exhaustive key search. I don't think script kiddies are likely to spend that much CPU time (even if they have a botnet) just to crack a CAPTCHA. (Actually, cracking CAPTCHAs is normally a lot easier with sufficiently intelligent image recognition...)

Commercial uses for grid computing?

I keep hearing from associates about grid computing which, from what I can gather, is highly distributed stuff along the lines of SETI@home.
Is anyone working on these sort of systems for business use? My interest is in figuring out if there's a commercial reason for starting software development in this field.
Rendering Farms such as Pixar
Model Evaluation e.g. weather, financials, military
Architectural Engineering e.g. earthquakes.
To list a few.
Grid computing is really only needed if you have a lot of WORK that needs to be done, like folding proteins; otherwise a simple server farm will likely be plenty.
Google is obviously a major user of grid computing; its entire search service relies on it, among many other services.
Engines such as BigTable are based on using lots of nodes for storage and computation. These are commercially very useful because they're a good alternative to a small number of big servers, providing better redundancy and cost effective scaling.
The downside is that the software is fiendishly difficult to write, but Google seems to manage that one OK :)
So anything which requires big storage and/or lots of computation.
I used to work for these guys. Grid computing is used all over. Anyone who makes computer chips uses grids to test designs before getting physical silicon cut. Financial websites use grids to calculate whether you qualify for that loan. These days grids are starting to replace big iron in a lot of places, as they tend to be cheaper to maintain over the long term.