What is a bitcoin? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 5 years ago.
The critical word in my question title is a.
I am not asking "what is bitcoin?". Numerous articles show up when you Google that question, and they all summarize what decentralized currency means, what the blockchain is, how mining generally works, and so on.
However, I find that none of them actually answers the question "what is a bitcoin?"
If my child were to ask me "what is a dollar?", I could pull out a one-dollar bill or coin for him to hold and examine.
What can I give him to examine when he asks me what a bitcoin is? (Not counting the makers of physical bitcoins.) Is there a string of digits I can print out and say "This! This is a bitcoin!", or do I point to some entry on a public ledger and say "There! That's a record of Dad's bitcoin!"?
Essentially, I would like to know: what is the actual, tangible representation of a bitcoin, or of any cryptocurrency unit?

Basically, a bitcoin can be seen as a measure of work, of "mining", done by other users. As you probably already know, the entire bitcoin system pretty much depends on its users constantly doing difficult computations that seal and secure transactions between users. To ensure that users keep doing those computations (which are quite expensive, by the way, mostly due to electricity and hardware costs), the creator of every block is awarded a certain amount of bitcoin simply for the presence of their block in the blockchain. To own a bitcoin is to be able to say: "I was given this bitcoin by that guy, who got it from that guy, who got it from that guy, who (let's skip these hundreds of transactions) got it from the guy who made that block, by now very deeply buried inside the blockchain, which means he was either very lucky or had a lot of computing power to find that lucky hash and release the block before others could." And since every user has a full copy of the blockchain on their computer (hopefully correct and agreed on by the majority), every single transaction ever made can be traced to its source, the bitcoins awarded for mining a block, and your claim can easily be verified.
That is a little simplified: in practice, transactions are done in tiny fractions of a bitcoin, so the bitcoins you hold will most likely originate from several different block makers, but the general concept is the same.
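To make that chain-of-transfers idea concrete, here is a tiny, purely illustrative Python sketch. The data structures are invented for the example (real Bitcoin transactions have scripts, multiple inputs and outputs, fees, and so on); it just walks a toy ledger backwards from an output you "own" until it reaches the coinbase output that originally created the coins.

```python
# Toy ledger: each transaction spends one previous output and creates a new one.
# This is a drastic simplification of real Bitcoin transactions and exists only
# to illustrate that "owning a bitcoin" means being the endpoint of a verifiable
# chain of transfers that starts at a mining reward (the coinbase).

from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Tx:
    txid: str
    spends: Optional[str]   # txid of the output being spent; None for a coinbase
    new_owner: str
    amount: float

LEDGER: Dict[str, Tx] = {t.txid: t for t in [
    Tx("tx-coinbase", None,          "miner", 50.0),  # block reward, creates new coins
    Tx("tx-1",        "tx-coinbase", "alice", 50.0),
    Tx("tx-2",        "tx-1",        "bob",   50.0),
    Tx("tx-3",        "tx-2",        "dad",   50.0),
]}

def trace_to_coinbase(txid: str) -> List[str]:
    """Follow the chain of spends back to the coinbase that minted the coins."""
    chain = []
    tx: Optional[Tx] = LEDGER.get(txid)
    while tx is not None:
        chain.append(f"{tx.new_owner} received {tx.amount} BTC via {tx.txid}")
        tx = LEDGER.get(tx.spends) if tx.spends else None
    return chain

if __name__ == "__main__":
    for step in trace_to_coinbase("tx-3"):
        print(step)
```

Running it prints the chain dad, then bob, then alice, then miner, which is essentially the claim described above, minus the cryptography that makes it verifiable.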

Bitcoin is generated as compensation for mining (running cryptographic calculations to maintain the integrity of transactions within the bitcoin network).
Every block in the blockchain has a specific value (an amount of bitcoin), and this value is credited to the bitcoin address(es) of whoever submitted the block successfully.
From the bitcoin wiki:
The number of Bitcoins generated per block starts at 50 and is halved every 210,000 blocks (about four years).
Their value to the outside world depends on what people agree they are worth (a free market, if you will).
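The halving rule quoted above is simple enough to compute directly. Here is a minimal sketch (the constants come from the rule quoted from the wiki; the function is only an illustration, not code from any real Bitcoin client):

```python
# Block subsidy schedule: starts at 50 BTC and is halved every 210,000 blocks.
# Summing 210,000 * 50 * (1 + 1/2 + 1/4 + ...) over all eras gives the
# well-known cap of roughly 21,000,000 BTC.

HALVING_INTERVAL = 210_000
INITIAL_SUBSIDY = 50.0

def block_subsidy(height: int) -> float:
    """Return the number of new bitcoins created by the block at `height`."""
    halvings = height // HALVING_INTERVAL
    return INITIAL_SUBSIDY / (2 ** halvings)

for h in (0, 209_999, 210_000, 630_000, 840_000):
    print(f"block {h:>7}: {block_subsidy(h)} BTC")
# blocks 0 and 209,999 pay 50 BTC, block 210,000 pays 25, 630,000 pays 6.25, 840,000 pays 3.125
```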


How to automate tokenomics research and data collection?

I want to create a tokenomics database.
I want to build a bot to automate data collection that, for each token on the blockchain, answers some of the questions asked and answered in this article.
I am a total newbie when it comes to blockchain. So, I have some basic questions.
Where is this information located? How did the author discover the numbers he quotes? Is there an API one could use to collect this information? (See update.)
Update: In writing this question, I discovered this Etherscan API. How might one leverage that API to obtain the tokenomics data I want? (A rough sketch follows the quoted excerpt below.)
Source [emphasis mine]
There will only ever be 21,000,000 bitcoin, and they’re released at a rate that gets cut in half every four years or so. Roughly 19,000,000 already exist, so there are only 2,000,000 more to be released over the next 120 years.
What about Ethereum? The circulating supply is around 118,000,000, and there’s no cap on how many Ether can exist. But Ethereum’s net emissions were recently adjusted via a burn mechanism so that it would reach a stable supply, or potentially even be deflationary, resulting in somewhere between 100-120m tokens total. Given that, we shouldn’t expect much inflationary pressure on Ether either. It could even be deflationary.
Dogecoin has no supply cap either, and it is currently inflating at around 5% per year. So of the three, we should expect inflationary tokenomics to erode the value of Doge more than Bitcoin or Ethereum.
The last thing you want to consider with supply is allocation. Do a few investors hold a ton of the tokens which are going to be unlocked soon? Did the protocol give most of its tokens to the community? How fair does the distribution seem? If a bunch of investors have 25% of the supply and those tokens will unlock in a month, you might hesitate before buying in.
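Regarding the "Is there an API one could use?" part: I can't say where the article's author got every number, but supply figures for ERC-20 tokens are exactly the kind of data the Etherscan API exposes. Below is a rough sketch using Etherscan's documented stats/tokensupply action; the contract address and API key are placeholders, and the raw result is denominated in the token's smallest unit, so you still need the token's decimals to interpret it.

```python
# Minimal sketch of querying the Etherscan API for an ERC-20 token's total supply.
# The endpoint and parameters follow Etherscan's documented "stats/tokensupply"
# action; YOUR_API_KEY and the contract address are placeholders.

import requests

ETHERSCAN_URL = "https://api.etherscan.io/api"

def token_total_supply(contract_address: str, api_key: str) -> int:
    params = {
        "module": "stats",
        "action": "tokensupply",
        "contractaddress": contract_address,
        "apikey": api_key,
    }
    resp = requests.get(ETHERSCAN_URL, params=params, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    if data.get("status") != "1":
        raise RuntimeError(f"Etherscan error: {data.get('result')}")
    return int(data["result"])  # smallest-unit integer; divide by 10**decimals

# Example (placeholder values):
# supply = token_total_supply("0x...token contract...", "YOUR_API_KEY")
```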

Tools for real-time data visualization in a table? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
So this might be a bit of a strange one, but I'm trying to find a tool that would help me visualize real time data in a form of a table rather than a graph/chart. There are a lot of tools out there like Grafana, Kibana, Tableau that kind of fit a similar purpose, but for a very different application and they're primarily made for aggregated data visualization.
I am essentially looking to build something like a departure board at an airport: flight AAA landed 20 minutes ago, flight XXX departs in 50 minutes, and once flight AAA is cleared it disappears from the board, etc. Only I want that in real time, as the input will be driven by actions users are performing on the shop floor with their RF guns.
I'd be connecting to a HANA database for this. I know it's definitely possible to build it myself using HTML5, Ajax and WebSockets, but before I set off on that journey I want to see if there's anything out there that somebody else has already done better.
Surely there's something out there already - especially in the manufacturing/warehousing space, where having real-time information on big screens is a big benefit?
Thanks,
Linas M.
Based on your description I think you might be looking for a dashboard solution.
Dashboards are used in many scenarios, especially where an overview of the past/current/expected state of a process is required.
This can be aggregated data (e.g. how long a queue is, how many tellers are occupied/available, what the throughput of your process is, etc.) or individual data (e.g. which cashier desk is open, which team player is online, etc.).
The real-time part of your question really boils down to what you define to be real-time.
Commonly, it’s something like “information delivery that is quick enough to be able to make a difference”.
So if, for example, I have a dashboard that tells me I will likely be short of, say, service staff tomorrow evening (based on my reservations), then to make a difference I need to know this as soon as possible (so I can call in more staff for tomorrow's shift). It won't matter much whether the data takes 5 or 10 minutes to get from system entry to the dashboard, but if I only learn about it tomorrow afternoon, that's too late.
Thus, if you’re after a dashboard solution, then there are in fact many tools available and you already mentioned some of them.
Others would be e.g. SAP tools like Business Objects Platform or SAP Cloud Analytics. To turn those into “real-time” all you need to do is to define how soon the data needs to be present in the dashboard and set the auto-refresh period accordingly.
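If you do end up building it yourself rather than buying a dashboard product, the "pick a refresh period" advice boils down to very little code. Here is a rough sketch of the polling approach, assuming SAP's hdbcli Python driver; the table name, columns, and connection details are invented placeholders for illustration.

```python
# Rough sketch of a polling "departure board": re-query HANA every N seconds and
# redraw the table. Assumes SAP's hdbcli Python driver; the view name, columns,
# and connection details below are invented placeholders.

import time
from hdbcli import dbapi  # SAP HANA Python client

REFRESH_SECONDS = 5  # the "how real-time is real-time enough" knob

QUERY = """
    SELECT ORDER_ID, STATUS, STARTED_AT, EXPECTED_DONE_AT
    FROM DEPARTURE_BOARD          -- placeholder view fed by the RF-gun events
    WHERE STATUS <> 'CLEARED'
    ORDER BY EXPECTED_DONE_AT
"""

def fetch_rows(conn):
    cur = conn.cursor()
    try:
        cur.execute(QUERY)
        return cur.fetchall()
    finally:
        cur.close()

def main():
    conn = dbapi.connect(address="hana-host", port=30015,
                         user="BOARD_USER", password="***")
    try:
        while True:
            rows = fetch_rows(conn)
            print("\033c", end="")          # crude screen clear for a terminal demo
            for row in rows:
                print(" | ".join(str(col) for col in row))
            time.sleep(REFRESH_SECONDS)
    finally:
        conn.close()

if __name__ == "__main__":
    main()
```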

How to pay your non-Western users? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have a question similar to this one, only that my question is focused on "non-Western" users (with this I refer to users outside of Western Europe and the US).
I have to pay users of my website (for services rendered for instance), and they are located at places where banking systems are poor to say the least. They do have ATMs, and credit cards (Visa, Mastercard, etc) work in most of these countries.
After many hours of browsing the web looking into this, I figure my best bet is to go with Prepaid Debit Cards. They allow me to deposit onto the cards, and my users to simply withdraw or pay for things using that card. In fact, several of those services were mentioned in the post I linked before. These were mentioned:
Payoneer: on paper their service looks good, but I have not yet received any reply to several inquiries made, their registration form is buggy, and their 'news' section mostly has news from 2008. All red flags to me.
iKobo: another provider named in the other topic and at Wikipedia (for what it's worth...). However, their SSL certificate is expired. Big red flag.
I've gone over most of the cards mentioned at this review site, but they all appear to be tailored to the US.
So my question is: does anybody know a good payment solution (could be Prepaid Debit Cards, could be something else) that is suitable for paying a wide audience of international users?
NOTE: these are mostly larger payments in the range of $100-500.
In the UK there are two providers: Caxton (Visa) and Fairfx (Mastercard). Their cards are called currency cards rather than prepaid debit cards, but I believe they are exactly what you are describing. Both are fully registered under UK financial law, so are reputable and reliable. Both are usable in a very wide range of countries. They are both usable in many, though not all ATMs (for instance in Thailand, they are usable in ATMs in local supermarkets, but not in local money changing kiosks). In addition to the problems #hol mentions about ATMs, in Asia in particular, local banks can, and do, choose to stop receiving payments from one of the two major networks - usually Mastercard - if there has been a high level of fraud on that network in that particular country.
I believe Caxton also offer a variety of money transfer options at low cost, but I have not used these services.
I have used both providers in travelling round 11 countries this year, mostly in the developing world, including Laos which has the least developed financial system of all the countries visited. They have provided a reliable and useful service. I have no other connection with either provider.
Whatever method you choose, you should be very careful not to fall foul of anti-money-laundering laws. I am not sure that sending around prepaid debit cards is legal.
My own first idea was a bank wire or PayPal (or Skrill, formerly Moneybookers). I remember Western Union as expensive, but I might be wrong.
Having worked in international payments for banks in my professional career, I know how money moves around the world, and I must say the prepaid debit card idea is not a bad one and actually quite innovative. I looked at the Payoneer offering and I think it looks OK.
I wondered whether there are hidden costs, but it does not look like it. The only cost not visible is the extra charge the ATM provider in the foreign country adds on top of the prepaid debit card provider's fees.
One thing came to my mind: make sure the receiver can use the card at ATMs in his country. Nowadays almost all cards can be used to withdraw money at almost any ATM in the world, but I would not take it for granted. In the worst case the payee has to go to a big airport, where the ATMs are more international. Ask the payee what symbols/names his nearest ATM shows, then ask the card provider's hotline whether the card can receive money at that ATM (e.g. whether EFTPOS is supported, etc.).
As for bank wires, it depends on the receiving country, and some countries tend to rip you off on charges. I guess you are in the US. I do not know if US banks provide the service, but UBS Switzerland has a "guaranteed OUR charge" that lets you make bank wires anywhere in the world where the receiver pays nothing and you are charged at most 20 CHF plus the regular bank wire charge (I think around 15 CHF to foreign countries). Foreign bank wires from the US also seem to be ridiculously expensive. I just looked up what Bank of America charges. OMG.
I would discourage bank wires for some countries. I had a bad experience with a certain country once, and maybe times have changed, but check with the payee first and rather make a test with a smaller amount if the country tends toward corruption. They may think the money is payment for something exported and want papers and all that before releasing it...
Of course you can still fall back on cash or cheques, which have the problem of being very slow and getting lost in the mail (though your cards also need to be mailed); cheques in international traffic tend to carry big charges, and cash gets poor exchange rates.
Lastly, you should really ask the payees for suggestions, too.
Couldn't you ask some of your users, or people in the countries that are in similar circumstances?
And Western Union Money Transfer is widely known for being able to pay people in a lot of countries.
I would use paypal or, if they accept it, bitcoin.
Skrill (formerly MoneyBookers) may also be an option as they provide an accompanying MasterCard pre-paid card. https://www.moneybookers.com/

What are the most useful software development metrics? [closed]

Closed 10 years ago.
I would like to track metrics that can be used to improve my team’s software development process, improve time estimates, and detect special case variations that need to be addressed during the project execution.
Please limit each answer to a single metric, describe how to use it, and vote up the good answers.
ROI.
The total amount of revenue brought in by the software minus the total cost of producing the software. Break the costs down by percentage of total cost and isolate your poorest-performing and most expensive area in terms of return on investment. Improve, automate, or eliminate that problem area if possible. Conversely, find your highest return-on-investment area and find ways to amplify its effects even further. If 80% of your ROI comes from 20% of your cost or effort, expand that particular area and minimize the rest by comparison.
Costs will include payroll, licenses, legal fees, hardware, office equipment, marketing, production, distribution, and support. This can be done on a macro level for a company as a whole or on a micro level for a team or individual. It can also be applied to time, tasks, and methods in addition to revenue.
This doesn't mean ignore all the details, but find a way to quantify everything and then concentrate on the areas that yield the best (objective) results.
Inverse code coverage
Get the percentage of code not executed during a test run. This is similar to what Shafa mentioned, but the usage is different. If a line of code is run during testing, then we know it might be tested. But if a line of code has not been run, then we know for sure that it has not been tested. Targeting these areas for unit testing will improve quality and takes less time than auditing the code that has already been covered. Ideally you can do both, but that never seems to happen.
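If the codebase is Python and you use coverage.py (an assumption; other coverage tools expose similar data), the inverse view can be pulled out of the JSON report it writes. A small sketch that ranks files by the number of untested lines:

```python
# Rank source files by the number of lines NOT executed during the test run.
# Assumes coverage.py: run `coverage run -m pytest` and then `coverage json`,
# which writes coverage.json with per-file "missing_lines" data read here.

import json

with open("coverage.json") as fh:
    report = json.load(fh)

uncovered = sorted(
    ((path, len(info["missing_lines"])) for path, info in report["files"].items()),
    key=lambda item: item[1],
    reverse=True,
)

for path, missing in uncovered[:10]:
    print(f"{missing:5d} untested lines  {path}")
```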
"improve my team’s software development process": Defect Find and Fix Rates
This relates to the number of defects or bugs raised against the number of fixes which have been committed or verified.
I'd have to say this is one of the really important metrics because it gives you two things:
1. Code churn. How much code is being changed on a daily/weekly basis (which is important when you are trying to stabilize for a release), and,
2. Shows you whether defects are ahead of fixes or vice-versa. This shows you how well the development team is responding to defects raised by the QA/testers.
A low fix rate indicates the team is busy working on other things (features perhaps). If the bug count is high, you might need to get developers to address some of the defects.
A low find rate indicates either your solution is brilliant and almost bug free, or the QA team have been blocked or have another focus.
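Most issue trackers can export created/resolved dates, and the find and fix rates then fall out of a short script. A sketch, with an invented CSV layout that you would adapt to whatever your tracker actually exports:

```python
# Weekly defect find rate vs. fix rate from an issue-tracker export.
# The CSV columns ("created", "resolved") are assumed for illustration.

import csv
from collections import Counter
from datetime import datetime

def week(date_str: str) -> str:
    d = datetime.strptime(date_str, "%Y-%m-%d")
    iso = d.isocalendar()
    return f"{iso[0]}-W{iso[1]:02d}"

found, fixed = Counter(), Counter()
with open("defects.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        found[week(row["created"])] += 1
        if row["resolved"]:                      # empty string means still open
            fixed[week(row["resolved"])] += 1

for wk in sorted(set(found) | set(fixed)):
    print(f"{wk}: found {found[wk]:3d}  fixed {fixed[wk]:3d}  "
          f"net open {found[wk] - fixed[wk]:+d}")
```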
Track how long it takes to do a task that has an estimate against it. If it came in well under, question why. If it ran well over, question why.
Don't make it a negative thing; it's fine if tasks blow out or were way underestimated. Your goal is to continually improve your estimation process.
Track the source and type of bugs that you find.
The bug source represents the phase of development in which the bug was introduced (e.g. specification, design, implementation, etc.).
The bug type is the broad style of bug, e.g. memory allocation, incorrect conditional.
This should allow you to alter the procedures you follow in that phase of development and to tune your coding style guide to try to eliminate over-represented bug types.
Velocity: the number of features per given unit time.
Up to you to determine how you define features, but they should be roughly the same order of magnitude otherwise velocity is less useful. For instance, you may classify your features by stories or use cases. These should be broken down so that they are all roughly the same size. Every iteration, figure out how many stories (use-cases) got implemented (completed). The average number of features/iteration is your velocity. Once you know your velocity based on your feature unit you can use it to help estimate how long it will take to complete new projects based on their features.
[EDIT] Alternatively, you can assign a weight like function points or story points to each story as a measure of complexity, then add up the points for each completed feature and compute velocity in points/iteration.
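For what it's worth, the arithmetic is trivial once you record completions per iteration; a tiny sketch with made-up story-point counts:

```python
# Velocity as completed story points per iteration, plus a naive forecast of how
# many iterations a new backlog would take. The numbers below are made up.

completed_points = [21, 18, 25, 23, 19]          # points finished in each past sprint
velocity = sum(completed_points) / len(completed_points)

new_project_points = 160
iterations_needed = new_project_points / velocity

print(f"average velocity : {velocity:.1f} points/iteration")
print(f"estimated effort : {iterations_needed:.1f} iterations for {new_project_points} points")
```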
Track the number of clones (similar code snippets) in the source code.
Get rid of clones by refactoring the code as soon as you spot the clones.
Average function length, or possibly a histogram of function lengths to get a better feel.
The longer a function is, the less obvious its correctness. If the code contains lots of long functions, it's probably a safe bet that there are a few bugs hiding in there.
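If the codebase is Python, the standard library's ast module is enough to collect those lengths. A sketch (requires Python 3.8+ for end_lineno; the source directory is a placeholder):

```python
# Histogram of function lengths (in lines) across a Python codebase, using the
# standard-library ast module. Requires Python 3.8+ for node.end_lineno; the
# "src" directory below is a placeholder.

import ast
from collections import Counter
from pathlib import Path

def function_lengths(root: str):
    for path in Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue                      # skip files that do not parse
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                yield node.end_lineno - node.lineno + 1

buckets = Counter((length // 10) * 10 for length in function_lengths("src"))
for start in sorted(buckets):
    print(f"{start:4d}-{start + 9:<4d} lines: {'#' * buckets[start]}")
```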
Number of failing tests or broken builds per commit.
Interdependency between classes: how tightly your code is coupled.
Track whether a piece of source has undergone review and, if so, what type. And later, track the number of bugs found in reviewed vs. unreviewed code.
This will allow you to determine how effectively your code review process(es) are operating in terms of bugs found.
If you're using Scrum, the backlog. How big is it after each sprint? Is it shrinking at a consistent rate? Or is stuff being pushed into the backlog because of (a) stuff that wasn't thought of to begin with ("We need another use case for an audit report that no one thought of, I'll just add it to the backlog.") or (b) not getting stuff done and pushing it into the backlog to meet the date instead of the promised features.
http://cccc.sourceforge.net/
Fan in and Fan out are my favorites.
Fan in:
How many other modules/classes use/know this module
Fan out:
How many other modules does this module use/know
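The CCCC tool linked above is aimed mainly at C/C++; for Python code you can approximate both numbers from import statements alone. A rough sketch that treats each module's imports as its fan-out and inverts the map to get fan-in (module resolution here is deliberately naive, and the "src" path is a placeholder):

```python
# Approximate fan-out (modules this module imports) and fan-in (modules that
# import it) from import statements. Deliberately naive: it keys modules by
# top-level name only and ignores dynamic imports.

import ast
from collections import defaultdict
from pathlib import Path

fan_out = defaultdict(set)

for path in Path("src").rglob("*.py"):
    module = path.stem
    try:
        tree = ast.parse(path.read_text(encoding="utf-8"))
    except SyntaxError:
        continue
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            fan_out[module].update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            fan_out[module].add(node.module.split(".")[0])

fan_in = defaultdict(set)
for module, deps in fan_out.items():
    for dep in deps:
        fan_in[dep].add(module)

for module in sorted(fan_out):
    print(f"{module:20s} fan-out={len(fan_out[module]):3d} fan-in={len(fan_in[module]):3d}")
```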
improve time estimates
While Joel Spolsky's Evidence-based Scheduling isn't per se a metric, it sounds like exactly what you want. See http://www.joelonsoftware.com/items/2007/10/26.html
I especially like and use the system that Mary Poppendieck recommends. This system is based on three holistic measurements that must be taken as a package (so no, I'm not going to provide 3 answers):
Cycle time
From product concept to first release or
From feature request to feature deployment or
From bug detection to resolution
Business Case Realization (without this, everything else is irrelevant)
P&L or
ROI or
Goal of investment
Customer Satisfaction
e.g. Net Promoter Score
I don't need more to know if we are in phase with the ultimate goal: providing value to users, and fast.
Number of similar lines (copy/pasted code).
improve my team’s software development process
It is important to understand that metrics can do nothing to improve your team’s software development process. All they can be used for is measuring how well you are advancing toward improving your development process in regards to the particular metric you are using. Perhaps I am quibbling over semantics but the way you are expressing it is why most developers hate it. It sounds like you are trying to use metrics to drive a result instead of using metrics to measure the result.
To put it another way, would you rather have 100% code coverage and lousy unit tests or fantastic unit tests and < 80% coverage?
Your answer should be the latter. You could even want the perfect world and have both but you better focus on the unit tests first and let the coverage get there when it does.
Most of the aforementioned metrics are interesting but won't help you improve team performance. The problem is that you're asking a management question in a development forum.
Here are a few metrics: estimates vs. actuals at the project schedule level and personal level (see the previous link to Joel's evidence-based method), % defects removed at release (see my blog: http://redrockresearch.org/?p=58), scope creep/month, and overall productivity rating (Putnam's productivity index). Also, developers' bandwidth is good to measure.
Every time a bug is reported by the QA team- analyze why that defect escaped unit-testing by the developers.
Consider this as a perpetual-self-improvement exercise.
I like the Defect Resolution Efficiency metric. DRE is the ratio of defects resolved prior to software release to all defects found. I suggest tracking this metric for each release of your software into production.
Tracking metrics in QA has been a fundamental activity for quite some time now. But often, development teams do not fully look at how relevant these metrics are in relation to all aspects of the business. For example, the typical tracked metrics such as defect ratios, validity, test productivity, code coverage etc. are usually evaluated in terms of the functional aspects of the software, but few pay attention to how they matter to the business aspects of software.
There are also other metrics that can add much value to the business aspects of the software, which is very important when an overall quality view of the software is looked at. These can be broadly classified into:
Needs of the beta users captured by business analysts, marketing and sales folks
End-user requirements defined by the product management team
Ensuring availability of the software at peak loads and ability of the software to integrate with enterprise IT systems
Support for high-volume transactions
Security aspects depending on the industry that the software serves
Availability of must-have and nice-to-have features in comparison to the competition
And a few more….
Code coverage percentage
If you're using Scrum, you want to know how each day's Scrum went. Are people getting done what they said they'd get done?
Personally, I'm bad at it. I chronically run over on my dailies.
Perhaps you can test CodeHealer
CodeHealer performs an in-depth analysis of source code, looking for problems in the following areas:
Audits: quality control rules such as unused or unreachable code, use of directive names and keywords as identifiers, identifiers hiding others of the same name at a higher scope, and more.
Checks: potential errors such as uninitialised or unreferenced identifiers, dangerous type casting, automatic type conversions, undefined function return values, unused assigned values, and more.
Metrics: quantification of code properties such as cyclomatic complexity, coupling between objects (Data Abstraction Coupling), comment ratio, number of classes, lines of code, and more.
Size and frequency of source control commits.

How do I calculate the "cost" of a crash? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
Background:
Some time ago, I built a system for recording and categorizing application crashes for one of our internal programs. At the time, I used a combination of frequency and aggregated lost time (the time between the program launch and the crash) for prioritizing types of crashes. It worked reasonably well.
Now, The Powers That Be want solid numbers on the cost of each type of crash being worked on. Or at least, numbers that look solid. I suppose I could use the aggregate lost time, multiplied by some plausible figure, but it seems dodgy.
Question:
Are there any established methods of calculating the real-world cost of application crashes? Or failing that, published studies speculating on such costs?
Consensus
Accuracy is impossible, but an estimate based on uptime should suffice if it is applied consistently and its limitations are clearly documented. Thanks, Matt and Orion, for taking the time to answer this.
The Powers That Be want solid numbers on the cost of each type of crash being worked on
I want to fly in my hot air balloon to Mars, but it doesn't mean that such a thing is possible.
Seriously, I think you have a duty to tell them that there is no way to accurately measure this. Tell them you can rank the crashes, or whatever it is that you can actually do with your data, but that's all you've got.
Something like "We can't actually work out how much it costs. We DO have this data about how long things are running for, and so on, but the only way to attach costs is to pretend that X minutes equals X dollars even though this has no basis in reality"
If you just make some bullcrap costing algorithm and DON'T push back at all, you only have yourself to blame when management turns around and uses this arbitrary made up number to do something stupid like fire staff, or decide not to fix any crashes and instead focus on leveraging their synergy with sharepoint portal internet web sharing love server 2013
Update: To clarify, I'm not saying you should only rely on stats with 100% accuracy, and just give up on everything else.
What I think is important is that you know what it is you're measuring. You're not actually measuring cost, you're measuring uptime. As such, you should be upfront about it. If you want to estimate the cost, that's fine, but I believe you need to make this clear.
If I were to produce such a report, I'd call it the 'crash uptime report' and maybe have a secondary field called "Estimated cost based on $5/minute." The managers get their cost estimate, but it's clear that the actual report is based on the uptime, and cost is only an estimate, and how the estimate works.
I've not seen any studies, but a reasonable heuristic would be something like:
(Time since the last application save when the crash occurred + Time to restart the application) * Average hourly rate of the application operator.
The estimation gets more complex if the crashes have some impact on external customers, or might delay other things (i.e. create a bottleneck such that another person winds up sitting around waiting because someone else's application crashed).
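If you go with a heuristic like that, it is worth encoding it explicitly so the assumptions stay visible. A sketch with invented numbers; the hourly rate and restart time are exactly the parts you would have to defend to the powers that be:

```python
# Rough cost estimate for a batch of crash reports using the heuristic above.
# Every number here is an assumption you would have to justify: the hourly rate,
# the restart time, and the idea that lost minutes map linearly to dollars at all.

HOURLY_RATE = 40.0          # assumed cost of one operator-hour, in dollars
RESTART_MINUTES = 3.0       # assumed time to get the application running again

def crash_cost(minutes_since_last_save: float) -> float:
    lost_minutes = minutes_since_last_save + RESTART_MINUTES
    return lost_minutes * (HOURLY_RATE / 60.0)

# Example: three crashes with 12, 45 and 2 minutes of unsaved work respectively.
crashes = [12.0, 45.0, 2.0]
total = sum(crash_cost(m) for m in crashes)
print(f"estimated cost of {len(crashes)} crashes: ${total:.2f}")
```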
That said, your 'powers that be' may well be happy with a very rough estimate so long as it's applied consistently and they can see how it is changing over time.
There is a missing factor here: most applications have a 'buckling' factor where crashes suddenly start "costing" a lot more because people lose confidence in the service your app is providing. Once that happens, it can be very costly to get users back to trusting and using the system.
It depends...
In terms of cost, the only thing that matters is the business impact of the crash, so it rather depends on the type of application.
For many applications, it may not be possible to determine business impact. For others, there may be meaningful measures.
Demand-based measures may be meaningful: if sales are steady, then downtime for a sales app may be a useful measure. If sales fluctuate unpredictably, then such measures are less useful.
Cost of repair may also be useful.