How to get historical data on total NEAR staked?

I am trying to figure out how best to aggregate this data: what was the total amount of NEAR staked on the entire network (not just one validator, but all of them) at a particular point in time?
I thought about getting the amount of NEAR staked from each active validator and then adding those numbers together, but this seems cumbersome. Is there a simpler way to do this?
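For illustration, this is roughly what I mean by adding the per-validator numbers together, assuming the per-validator stake figures were already collected into a table somewhere (the table and column names below are made up for the example, not an existing dataset):
-- hypothetical table: one row per validator per snapshot (block height or epoch)
select block_height,
       sum(staked_amount) as total_near_staked
from validator_stake_snapshots
where block_height = 12345678   -- the point in time I care about
group by block_height;
Collecting and maintaining those per-validator snapshots for every point in time is the part that feels cumbersome.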
Thanks.

Related

Derived Table Error: "The multi-part identifier could not be bound"

I'm having trouble getting the results I would like from the query I've built. The overall goal I'm trying to accomplish is to get the first odometer reading of the month and the last odometer reading of the month for a specific vehicle. I would then like to subtract the two to get total miles driven for that month. I figured a derived table with window functions would best help to accomplish this goal (see example SQL below).
SELECT
    VEHICLE_ID2_FW
FROM
    (SELECT
         VEHICLE_ID2_FW,
         LOCATION_CODE_FW,
         MIN(ODOMETER_FW) OVER (PARTITION BY YEAR(DATE_FW), MONTH(DATE_FW)) AS MIN_ODO,
         MAX(ODOMETER_FW) OVER (PARTITION BY YEAR(DATE_FW), MONTH(DATE_FW)) AS MAX_ODO
     FROM
         GPS_TRIPS_FW) AS G
I keep running into an issue where the derived table's query, by itself, runs and works. However, when I wrap it in the FROM clause, it throws the error:
The multi-part identifier could not be bound
Hoping that I could get some help figuring this out and maybe finding an overall better way to accomplish my goal. Thank you!
Odometers only increase (well, that should be true). So just use aggregation:
select VEHICLE_ID2_FW, year(date_fw), month(date_fw),
       min(ODOMETER_FW), max(ODOMETER_FW),
       max(ODOMETER_FW) - min(ODOMETER_FW) as miles_driven_in_month
from GPS_TRIPS_FW
group by VEHICLE_ID2_FW, year(date_fw), month(date_fw);
This answers the question that you asked. I don't think it solves your problem, though, because the total miles driven per month will not add up to the total miles driven. The issue is the miles driven between the last record at the end of one month and the first record at the beginning of the next month.
If this is an issue, ask another question. Provide sample data, desired results, and an appropriate database tag.
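That said, if the boundary miles do matter, one possible direction (just a sketch, not tested against your data) is to pair each reading with the next one using lead() and credit each segment of distance to the month in which it starts, so that the monthly totals add up to the overall total:
select VEHICLE_ID2_FW, year(DATE_FW), month(DATE_FW),
       sum(next_odo - ODOMETER_FW) as miles_driven_in_month
from (select VEHICLE_ID2_FW, DATE_FW, ODOMETER_FW,
             lead(ODOMETER_FW) over (partition by VEHICLE_ID2_FW
                                     order by DATE_FW) as next_odo
      from GPS_TRIPS_FW
     ) t
where next_odo is not null
group by VEHICLE_ID2_FW, year(DATE_FW), month(DATE_FW);
Each row's distance to the next reading is attributed to the month of the earlier reading; the last reading for a vehicle has no following record and is simply dropped.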

Limit to how many fields a '*' can hold in Teradata?

I have wasted about two hours messing around with character field lengths in my data to get around an error about 'Right truncation of string data'. It turns out it seems to be related to the table '*' operator.
It appears as if this operator can only hold a certain number of fields before throwing an error. Does anyone know if this is the case? I am working on a large series of tables with hundreds of columns, and manually listing them at each step in my job makes maintenance much more difficult. If this is a known issue, is there a way around it?
Current versions of Teradata are limited to 2048 columns per table.
To check database limits, please refer to "SQL Reference: Fundamentals, Appendix C".
If that is not your case, please provide some more info about your tables and the exact error you see.
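If you want to rule the column limit out, you can count the columns of the table you are selecting from in the data dictionary (this assumes you have access to the DBC.ColumnsV view; replace the database and table names with your own):
SELECT COUNT(*) AS column_count
FROM DBC.ColumnsV
WHERE DatabaseName = 'your_database'
  AND TableName = 'your_table';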

Is it worth introducing "incorrect" results to avoid crashing a program?

In my organisation, I see a lot of places where code has been put inside monitor blocks (RPG's version of try..except) to prevent raising exceptions on arithmetic errors. For instance:
Monitor;
  Pxxhour = Bctime/60;
  PxxMin = %Rem(Bctime:60);
On-Error;
  Pxxhour = 0;
  PxxMin = 0;
Endmon;
Pxxhour and Pxxmin are screen fields that will be displayed to users. So if there is an error in the operations, these get a value of 0. Though this prevents the program from crashing, how does it help? Users keep seeing the wrong values on the screen. Similarly, I see code which assigns the highest possible value for a given variable rather than allowing an overflow exception. Though this will prevent the program from blowing up, how does it help in the long run? Wouldn't calculations have wrong values and result in wrong business data?
The answers given below by @jmarkmurphy and @Charles successfully address the question from an RPG and IBM midrange perspective, which is what I was after.
There are two use cases for a MONITOR block:
Expected errors
Unexpected errors
For expected errors, replacing bad or invalid data with an accepted value is a valid solution in some cases. The trick is knowing which cases. The answer to that is something your business people would need to help decide; it depends on what the program is doing and which data has the problem.
For instance, given some sort of internal sales report, you might have something like this:
dcl-c DIVIDE_BY_ZERO const(00102);
dcl-c RESULT_TO_LARGE const(00103);

monitor;
  averageSale = totalSalesAmount / numberSales;
on-error DIVIDE_BY_ZERO;
  averageSale = 0;
on-error RESULT_TO_LARGE;
  averageSale = *HIVAL;
endmon;
What's important about the above is that I'm expecting one of two possible errors and I've decided to handle them a certain way. The business people don't care that technically averageSale is undefined when numberSales is *ZERO. They just want a zero to appear on the report. They also understand that there's only so much room on the page and that if the number is all nines, the actual value might be bigger.
An unexpected error, such as a decimal data error, would not be caught by this MONITOR block.
For an unexpected error caught by a monitor block via an ON-ERROR with *ALL or no error code specified, I'd expect to see some sort of logging of the issue, followed by either skipping the problem record or cleanly shutting down, depending on what the program is doing in the first place.
It appears that your code is expecting certain errors, but without explicitly defining which error codes it's willing to handle. This is lazy and not a good practice.
As far as your questions about whether or not the handling of those expected errors is valid: only you and your users can decide that.
You might want to take a look at Chapter 7, "Exception and error handling", of the IBM Redbook "Who Knew You Could Do That with RPG IV? Modern RPG for the Modern Programmer".
What Should I Do When I Have Errors in My Calculations?
Programs that blow up on users are bad, even if it is the user's fault. It makes the user believe that the program is buggy, and then anything unexpected that happens becomes the program's fault, something to be fixed. Things can get really out of hand in this manner, causing help desk calls for ordinary occurrences that just appear a little odd, even when the outcome is actually correct.
One option is to validate the user input to prevent calculation errors, but what do you do when you can't really prevent all of them? In our world, one of these situations is invoicing. 5250 screens have limited real estate and you can't always make the fields big enough to hold all eventualities. So there are tradeoffs. Maybe you need to be able to sell thousands of some small items on a single invoice, but the largest total invoice you have ever had is $100K. So you size your fields like this:
dcl-s quantity Packed(5:0);
dcl-s unitPrice Packed(7:2);
dcl-s amount Packed(9:2);
All are odd because they take up the same space on disk as the next lower even precision. You don't sell fractional quantities, and the maximum value in each field is:
quantity = 99,999;
unitPrice = $99,999.99;
amount = $9,999,999.99;
Now you can see that these maximums should easily handle all valid invoices, but they also leave plenty of potential for calculation errors. If the user keys in maximum numbers for quantity and unitPrice, the resulting number would require a Packed(12:2) field, and that would cause an overflow. When the unit price is stored in the invoice detail, we can add an edit when the quantity and unit price are entered that checks for an extended-amount overflow and sends an appropriate error message. But what if unit prices are not stored in the invoice detail, but instead in a pricing table? Then there is no good way, if a price is changed for example, to ensure that none of the existing invoices will be affected adversely.
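To make the overflow concrete: 99,999 * 99,999.99 = 9,999,899,000.01, which needs ten digits to the left of the decimal point, while amount as Packed(9:2) can only hold seven. So the extended amount can overflow even though each individual field is within its own limit.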
So what do you do about a decimal overflow, or any other calculation error, be it a data problem or something else? And what happens when the error occurs? Blowing up the program is not a good option. Another option, the one that seems to have been taken in the question, is to apply some default value that the users will quickly recognize as out of the ordinary. It will appear in reports and on screens. When the users see those excessively large or small numbers, they know to go back and check the data.

tableUnavailable dependent upon size of search

I'm experiencing something rather strange with some queries that I'm performing in BigQuery.
Firstly, I'm using an externally backed table (csv.gz) with about 35 columns. The total data in the location is around 5 GB, with an average file size of 350 MB. The reason I'm doing this is that I continually add data to and remove data from the table on a rolling basis, to give me a view of the last 7 days of our activity.
When querying, if I perform something simple like:
select * from X limit 10
everything works fine. It continues to work fine if you increase the limit up to 1 million rows. As soon as you up the limit to ten million:
select * from X limit 10000000
I end up with a tableUnavailable error "Something went wrong with the table you queried. Contact the table owner for assistance. (error code: tableUnavailable)"
Now, according to any literature on this, this usually results from using some externally owned table (I'm not). I can't find any other enlightening information for this error code.
Basically, if I do anything slightly complex on the data, I get the same result. There's a column called eventType that has maybe a couple hundred different values in the entire dataset. If I perform the following:
select eventType, count(1) from X group by eventType
I get the same error.
I'm getting the feeling that this might be related to limits on external tables. Can anybody clarify or shed any light on this?
Thanks in advance!
Doug

Variable Calculation Strings with Variable Operators

I am currently working to integrate a third party mapping tool into my current system.
The problem is that the tool, since it replaces an existing system, needs certain tweaks, as well as a summarized version of the data to make SSRS reporting much faster.
Right now, all I would like to do from a dataset perspective is return something similar to: Sum(Numerator1) & First(Operator1) & Sum(Numerator2) & First(Operator2) & Sum(Numerator3) & First(Operator3) -- and so on if another Numerator is needed.
The problem I have is my calculation can in theory be anything, so even storing it like this will be a huge pain.
So I'm passing balances into each one of those fields, the Numerators being numbers and the Operators being one of (+, -, *, /). The reason I see this as my only option is that I need the Numerators to be able to fluctuate between groups: whether I'm grouping 5 rows, 10 rows, or a full total together, I am still doing the same calculation; only my balances change.
The problem is how I can make SSRS evaluate whatever I have to pass in here, and whether it is possible to do this as a string.
Division is the kicker here, and the main reason I have to do this in the report, as I might have data for 20 units. I need to provide the initial calculation for each unit as well as the calculation with the balances summed across all 20 units, to figure, say, a percent of sales or something.
If I do this in the report I would have to have a total for each unit and then for the overall total. I don't want to do this because the report will have an untold number of additional subtotals, and trying to bring the final balance back into the query just will not work.
I appreciate any help or ideas anyone has for this.
Thank you,
Striker~
You can't evaluate a string as an expression in SSRS.
If you have the time and the know-how, then you could write a function in VB.net that parses the expression and returns the result.
You would then call that function from your report like so:
=Code.ParseString(Fields!MyStringExpression.Value)
Without knowing why your calculation could be anything, we can't provide much more information!