Converting Sum's Values into Base 10 - circuit-diagram

I am making a computer purely out of water. It's going well, but I have one problem. In the adders, where the sum exits, I just have water collected there. I don't know how to take the base-two sum values, convert them back into base 10, and display them. Any help would be great.

Why are you using adders? Do you mean AND gates?
No adders are needed.
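For the conversion arithmetic itself, a base-two sum is just a weighted total of its bits. A tiny Python sketch, assuming a 4-bit sum read most-significant-bit first (the example value is made up):

```python
# Sum bits from the adder, most significant bit first (assumed ordering).
bits = [1, 0, 1, 1]          # binary 1011

value = 0
for bit in bits:
    value = value * 2 + bit  # shift the running total left, then add the next bit

print(value)                 # 11, the base-10 value of binary 1011
```

A display then just needs to map that base-10 value onto whatever indicator you build for each digit.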


Random Rain Sounds in GameMaker

I'm making a game with GameMaker 1.4. I'm in a dungeon room and I want to add drop sounds (like it's damp) at random intervals.
Thank you!
Depending on how often you want the raindrop sound to play, you can use the random_range() function (https://docs.yoyogames.com/source/dadiospice/002_reference/maths/real%20valued%20functions/random_range.html) either to count up by a random amount toward a pre-defined threshold, or to check whether it returns a specific value (like rolling a 1 through 4 on a 10-sided die). Once that threshold is hit, either randomly or by adding up to it, just play the raindrop sound file you have normally by using the audio_play_sound() function (https://docs.yoyogames.com/source/dadiospice/002_reference/game%20assets/sounds/audio_play_sound.html).
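If it helps, here is the counter-to-threshold version of that idea as a runnable Python sketch; the threshold, the random increment range, and the play_drop_sound()/step() names are assumptions, and in GameMaker itself you would use random_range() and audio_play_sound() in the Step event as described above:

```python
import random

DROP_THRESHOLD = 100          # assumed value; tune for how often drops should play
counter = 0.0

def play_drop_sound():
    """Stand-in for GameMaker's audio_play_sound(snd_drop, 1, false)."""
    print("drip")

def step():
    """Logic to run once per game step (GameMaker's Step event)."""
    global counter
    # Count up by a random amount each step, like random_range(0, 5) in GML.
    counter += random.uniform(0, 5)
    if counter >= DROP_THRESHOLD:
        play_drop_sound()
        counter = 0.0

# Simulate a few seconds' worth of steps at 30 steps per second.
for _ in range(300):
    step()
```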

Graphite divideSeries wrong result

I'm using Grafana and Graphite to draw a "success rate" percentage on our dashboard:
I can draw the success rate
I can plot the total metric
But when I try to get the percentage, it doesn't seem to do the division as I was taught in school :)
Am I doing something wrong?
(I tried with both the current value of the series and the average)
Following the suggestion of #AussieDan, the situation gets weirder:
Drawing all three series, I can see just two:
Removing the total from the graph, the values are consistent...
But when I leave just the ratio of the two:
Using asPercent:
Without seeing the graph of the underlying series it's tough to say what exactly is going on. If you can plot those 4 queries on a standard graph panel it might help to figure out what's happening.
The graph images help somewhat; they do show that asPercent and divideSeries are internally consistent, with the asPercent values being 100x the divideSeries values, which is what we expect.
Can you grab 2 more screenshots with just success and just total so we can see what's going on there? I'm not sure I trust the screenshot where divideSeries doesn't show up despite being marked visible.

Optimization through machine learning

I've got a system that takes 15 points out of a 17-by-17 grid as input (order doesn't matter) and generates a single scalar as output. The system is not representable by a formal function.
The goal is to find the 15 points for which the output scalar is minimal. Solving this problem exhaustively simply takes too much time to be practical: each run takes 14 seconds, and there are far too many ways to choose 15 points out of 289 to try them all.
I've started taking a machine learning course online, but this problem does seem rather unsophisticated, and I wonder if anyone can point me in the right direction. Any help is greatly appreciated!
Use simulated annealing. I guess this will be close to optimal here.
Start with a random selection of the 15 points. Then, in each iteration, change one point and accept the new state if the resulting scalar value is lower. If it is larger, accept it with a certain probability (a Boltzmann factor). Finally, repeat this for a small number of randomly chosen initial states and keep the lowest value found.
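A minimal Python sketch of that procedure, assuming a swap-one-point neighbourhood move and an exponential cooling schedule; evaluate() is a cheap stand-in for the real 14-second black-box system:

```python
import math
import random

GRID = [(x, y) for x in range(17) for y in range(17)]   # all 289 candidate points

def evaluate(points):
    """Stand-in for the real black-box system (each real run takes ~14 s).
    Here: sum of squared distances to the grid centre, purely for illustration."""
    return sum((x - 8) ** 2 + (y - 8) ** 2 for x, y in points)

def anneal(n_iters=500, t_start=1.0, t_end=0.01):
    # Start from a random selection of 15 points.
    current = random.sample(GRID, 15)
    current_cost = evaluate(current)
    best, best_cost = list(current), current_cost

    for i in range(n_iters):
        # Exponential cooling schedule (an assumption; tune to your time budget).
        t = t_start * (t_end / t_start) ** (i / n_iters)

        # Propose a neighbour: swap one selected point for an unused one.
        candidate = list(current)
        unused = [p for p in GRID if p not in candidate]
        candidate[random.randrange(15)] = random.choice(unused)

        cost = evaluate(candidate)
        # Always accept improvements; accept worse states with Boltzmann probability.
        if cost < current_cost or random.random() < math.exp((current_cost - cost) / t):
            current, current_cost = candidate, cost
            if cost < best_cost:
                best, best_cost = list(candidate), cost

    return best, best_cost

if __name__ == "__main__":
    points, cost = anneal()
    print(cost, points)
```

With the real 14-second evaluation, 500 iterations is roughly two hours, so adjust n_iters and the number of random restarts to whatever time budget you have.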

Neural Network Input and Output Data formatting

Thanks for reading my thread.
I have read some of the previous posts on formatting/normalising input data for a Neural Network, but cannot find something that addresses my queries specifically. I apologise for the long post.
I am attempting to build a radial basis function network for analysing horse racing data. I realise that this has been done before, but the data that I have is "special" and I have a keen interest in racing/sportsbetting/programming so would like to give it a shot!
Whilst I think I understand the principles for the RBFN itself, I am having some trouble understanding the normalisation/formatting/scaling of the input data so that it is presented in a "sensible manner" for the network, and I am not sure how I should formulate the output target values.
For example, in my data I look at the "Class change", which compares the class of race that the horse is running in now compared to the race before, and can have a value between -5 and +5. I expect that I need to rescale these to between -1 and +1 (right?!), but I have noticed that many more runners have a class change of 1, 0 or -1 than any other value, so I am worried about "over-representation". It is not possible to gather more data for the higher/lower class changes because that's just 'the way the data comes'. Would it be best to use the data as-is after scaling, or should I trim extreme values, or something else?
Similarly, there are "continuous" inputs - like the "Days Since Last Run". It can have a value between 1 and about 1000, but values in the range of 10-40 vastly dominate. I was going to scale these values to be between 0 and 1, but even if I trim the most extreme values before scaling, I am still going to have a huge representation of a certain range - is this going to cause me an issue? How are problems like this usually dealt with?
Finally, I am having trouble understanding how to present the "target" values for training to the network. My existing results data has the "win/lose" (0 or 1?) and the odds at which the runner won or lost. If I just use the "win/lose", it treats all wins and losses the same when really they're not - I would be quite happy with a network that ignored all the small winners but was highly profitable from picking 10-1 shots. Similarly, a network could be forgiven for "losing" on a 20-1 shot, but losing a bet at 2/5 would be a bad loss. I considered making the results (+1 * odds) for a winner and (-1 / odds) for a loser to capture the issue above, but this will mean that my results are not a continuous function, as there will be a "discontinuity" between short-price winners and short-price losers.
Should I have two outputs to cover this - one for bet/no bet, and another for "stake"?
I am sorry for the flood of questions and the long post, but this would really help me set off on the right track.
Thank you for any help anyone can offer me!
Kind regards,
Paul
The documentation that came with your RBFN is a good starting point to answer some of these questions.
Trimming data, aka "clamping" or "winsorizing", is something I use for similar data. For example, "days since last run" for a horse could be anything from just one day to several years, but tends to centre in the region of 20 to 30 days. Some experts use a figure of, say, 63 days to indicate a "spell", so you could have an indicator variable like "> 63 = 1, else 0". One approach is to look at the outliers, say the upper or lower 5% of any variable, and clamp these.
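As a rough illustration, here is a small numpy sketch of that clamping plus the "> 63 days" spell indicator; the percentile cut-offs and the sample values are placeholders, not recommendations:

```python
import numpy as np

def clamp_to_percentiles(values, lower_pct=5, upper_pct=95):
    """Winsorize: clamp anything outside the chosen percentiles."""
    lo, hi = np.percentile(values, [lower_pct, upper_pct])
    return np.clip(values, lo, hi)

days_since_last_run = np.array([7, 21, 28, 35, 63, 180, 900])  # made-up sample

clamped = clamp_to_percentiles(days_since_last_run)
# Scale the clamped values to [0, 1] as a network input.
scaled = (clamped - clamped.min()) / (clamped.max() - clamped.min())

# Separate binary indicator: 1 if the horse has been off for more than 63 days.
spell_indicator = (days_since_last_run > 63).astype(float)
```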
If you use odds/dividends anywhere, make sure you use the implied probabilities, i.e. 1/(odds+1), and a useful idea is to normalize these so they sum to 100%.
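For example, assuming "normalize to 100%" means rescaling the implied probabilities of the runners in one race so they sum to 1, a quick sketch (the odds values are made up):

```python
# Fractional odds for each runner in one race (made up: 2/1, 4/1, 5/1, 10/1, 20/1).
odds = [2.0, 4.0, 5.0, 10.0, 20.0]

implied = [1.0 / (o + 1.0) for o in odds]   # 1/(odds+1), as suggested above
book_total = sum(implied)                    # > 1.0 because of the bookmaker's margin

# Normalise so the race's probabilities sum to 100%.
normalised = [p / book_total for p in implied]
print(round(sum(normalised), 6))             # 1.0
```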
The odds or parimutuel prices tend to swamp other predictors, so one technique is to develop separate models: one for the market variables (the market model) and another for the non-market variables (often called the "fundamental" model).

Precision gains where data move from one table to another in SQL Server

There are three tables in our SQL Server 2008 database:
transact_orders
transact_shipments
transact_child_orders
All three have a common column, carrying_cost. The data type is the same in all three tables: it is float, with NUMERIC_PRECISION 53 and NUMERIC_PRECISION_RADIX 2.
In table 1, transact_orders, this column has the value 5.1 for three rows. convert(decimal(20,15), carrying_cost) returns 5.100000..... here.
In table 2, transact_shipments, three rows fetch carrying_cost from those three rows in transact_orders.
convert(decimal(20,15), carrying_cost) returns 5.100000..... here also.
Table 3, transact_child_orders, sums up those three carrying costs from transact_shipments, and the value shown there is 15.3 when I run a normal select.
But convert(decimal(20,15), carrying_cost) returns 15.299999999999999 in this table, and that extra-precision value also shows up in the UI, even though the UI only fetches the value and does no conversion. In the Java code, the variable that fetches the value from the DB is defined as double.
The code in step 3 that sums up the three carrying costs is simply:
...sum(isnull(transact_shipments.carrying_costs,0)) sum_carrying_costs,...
Any idea why this change occurs in the third step? Any help will be appreciated. Please let me know if any more information is needed.
Rather than post a bunch of comments, I'll write an answer.
Floats are not suitable for precise values where you can't accept rounding errors, for example in finance.
Floats can scale from very small numbers to very large numbers, but they don't do that without losing a degree of accuracy. You can look up the details online; there is a host of good material out there for you to read.
But, simplistically, it's because they're true binary numbers: some decimal numbers just can't be represented as a binary value with 100% accuracy. (Just like 1/3 can't be represented with 100% accuracy in decimal.)
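The same effect is easy to reproduce outside SQL Server: Python's float is the same IEEE 754 double-precision format as SQL Server's float(53), so your 15.3 example comes straight out of it:

```python
from decimal import Decimal

total = 5.1 + 5.1 + 5.1
print(total)                # 15.299999999999999 -- the value your query shows
print(Decimal(5.1))         # 5.0999999999999996447286321199499070644378662109375,
                            # the nearest binary double to 5.1
print(Decimal("5.1") * 3)   # 15.3 exactly, using a decimal type instead of float
```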
I'm not sure what is causing your performance issue with the DECIMAL data type; often it's because there is some implicit conversion going on. (You've got a float somewhere, or decimals with different definitions, etc.)
But regardless of the cause, nothing is faster than integer arithmetic. So, store your values as integers. £1.10 could be stored as 110p, or, if you know you'll get fractions of a penny for some reason, 1100dp (deci-pennies).
You do then need to consider the biggest value you will ever reach, and whether INT or BIGINT is more appropriate.
Also, when working with integers, be careful of divisions. If you divide £10 between 3 people, where does the last 1p need to go? £3.33 for two people and £3.34 for one person? £0.01 eaten by the bank? But, invariably, it should not get lost to the digital elves.
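Here is a minimal sketch of one way to handle that remainder, assuming a simple "the first shares absorb the leftover pennies" policy (the split_pennies helper is purely illustrative):

```python
def split_pennies(total_pennies, people):
    """Split an integer number of pennies so that nothing is lost."""
    base, remainder = divmod(total_pennies, people)
    # Hand the leftover pennies to the first `remainder` people, one each.
    return [base + 1 if i < remainder else base for i in range(people)]

shares = split_pennies(1000, 3)   # £10.00 between 3 people
print(shares)                     # [334, 333, 333]  -> £3.34, £3.33, £3.33
print(sum(shares))                # 1000 -> nothing eaten by the bank or the elves
```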
And, obviously, when presenting the number to a user, you then need to manipulate it back to £ rather than dp; but you need to do that often anyway, to get £10k or £10M, etc.
Whatever you do, if you don't want rounding errors due to floating-point values, don't use FLOAT.
(There is a lot written online about how to use floats and, more importantly, how not to. It's a big topic; just don't fall into the trap of "it's so accurate, it's amazing, it can do anything" - I can't count the number of times people have screwed up data using that unfortunately common but naive assumption.)