I am writing an economic project in AnyLogic. I want to sum all the money that flows between two stocks; in effect, I need to sum all the values that a flow takes during the simulation, until a specific condition is met. How can I do that?
Thanks
Well, you need to understand how system dynamics works: it is a continuous process!
If you want to track your flows, the easiest option is to use a dataset object which tracks the flow at specific points in time.
For example, a flow "AdoptionRate" can be tracked by a dataset "AdoptersDS" every 0.1 minutes. Bear in mind, though, that this only samples the flow at those specific points in time. You can set up similar datasets for your stocks as well.
Alternatively, you could write a recurring event which stores the values at specific points in time into your built-in database.
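If what you need is a running total rather than samples, a cyclic event can accumulate the flow numerically. Below is a minimal sketch, assuming the flow appears in Java code as a double named MoneyFlow and that you have added a variable totalMoney, a boolean done, and a cyclic event recurring every 0.1 time units (all of these names are placeholders, not from your model):

```java
// Action code of a cyclic event firing every 0.1 model time units.
// MoneyFlow, totalMoney, done and threshold are placeholder names.
if (!done) {
    // rate x time step approximates the money moved since the last sample
    totalMoney += MoneyFlow * 0.1;
    // stop accumulating once your specific condition is met
    if (totalMoney >= threshold) {
        done = true;
    }
}
```

Keep in mind this is a rectangle-rule approximation of the continuous integral, so the smaller the recurrence interval, the more accurate the total.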
This is more of a general discussion than a code question.
I have a website monitoring platform where users can input their website URL and we check it every X minutes based on the customer's interval. At each interval, an entry is stored as an UptimeCheck model in the Laravel 8 project, with the status being down or up.
If a customer has 20 monitors, and each checks every minute, then over a 30-day period that one customer would accumulate around 864,000 rows (20 monitors × 1,440 checks/day × 30 days).
My question really is: do I need to keep this number of rows?
The reason this number of rows is kept is so that we can present a graph showing the average website uptime.
My thinking is that if I created some kind of SVG programmatically for each day and stored that in the table, I wouldn't need to store as many entries; my concern is how I would merge the SVGs into one to present a daily graph.
What kind of libraries could I use and how else might I approach this?
Unlike with performance data, the trick for storing uptime data is simple: you don't store it. ;)
You need to store DOWNTIME data instead. Record only unavailability events and derive uptime when displaying reports.
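To illustrate the idea (your project is Laravel/PHP, so treat this Java sketch as language-neutral pseudocode; Downtime and uptimePercent are hypothetical names): store one row per outage and derive the uptime percentage for any reporting window on demand.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;

// Hypothetical outage record: one row per incident instead of one row per check.
record Downtime(Instant start, Instant end) {}

public class UptimeReport {
    // Uptime % over a reporting window, derived purely from stored outages.
    static double uptimePercent(List<Downtime> outages, Instant from, Instant to) {
        long windowSeconds = Duration.between(from, to).getSeconds();
        long downSeconds = 0;
        for (Downtime d : outages) {
            // Clip each outage to the reporting window before summing.
            Instant s = d.start().isBefore(from) ? from : d.start();
            Instant e = d.end().isAfter(to) ? to : d.end();
            if (e.isAfter(s)) downSeconds += Duration.between(s, e).getSeconds();
        }
        return 100.0 * (windowSeconds - downSeconds) / windowSeconds;
    }
}
```

For monitors that are up most of the time, this turns hundreds of thousands of check rows a month into a handful of outage rows, and the daily graph becomes a query rather than stored SVGs.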
I'm looking into using Azure Data Factory V2 for integration imports and want to know if there's a way to track the cost of individual pipelines being run?
For example if I had 3 pipelines which represented 3 different integrations would there be a way to see the cost incurred from each?
Is there also a way to do this in near real time, so that during a month I could somehow put a budget on each integration/pipeline?
The Azure Data Factory bill only shows the total cost. We can't get the cost of each pipeline in Data Factory; we must calculate the price manually.
We can see pipeline-level consumption under Monitor --> Pipeline runs --> Consumption.
The Azure documentation says that "The pipeline run consumption view shows you the amount consumed for each ADF meter for the specific pipeline run, but it does not show the actual price charged". We need to calculate the cost manually with the Pricing calculator.
For your first question, is there a way to see the cost incurred from each?
No, there isn't; you must calculate the cost manually (see the sketch below).
Is there also a way to do this in near real time, so that during a month I could somehow put a budget on each integration/pipeline?
I'm afraid not.
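As a rough illustration of the manual calculation, you can multiply the meter quantities from the Consumption view by your region's unit prices from the Pricing calculator. The rates below are placeholders, not real Azure prices:

```java
// Rough per-pipeline cost estimate from the Consumption view numbers.
// The unit prices are placeholders - take the real ones from the
// Azure Pricing calculator for your region and integration runtime.
public class AdfCostEstimate {
    static final double PRICE_PER_1000_ACTIVITY_RUNS = 1.00; // assumed rate
    static final double PRICE_PER_DIU_HOUR = 0.25;           // assumed rate

    static double pipelineCost(double activityRuns, double diuHours) {
        return (activityRuns / 1000.0) * PRICE_PER_1000_ACTIVITY_RUNS
             + diuHours * PRICE_PER_DIU_HOUR;
    }

    public static void main(String[] args) {
        // Example: one pipeline consumed 2,400 activity runs and 6.5 DIU-hours.
        System.out.printf("Estimated cost: $%.2f%n", pipelineCost(2400, 6.5));
    }
}
```

Feeding each pipeline's consumption figures through a helper like this on a schedule is about as close as you can get to a per-pipeline budget.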
Others have posted almost the same question; please see: Azure Data Factory Pipeline Consumption Details
I've been tasked to create a data visualisation dashboard that relies on me drilling into the existing database.
One report is 'revenue per available covers'; part of the calculation is determining how many hours were booked against how many hours were available.
The problem is the 'hours available': currently this is stored in a schedule table that has a 1-1 link with the venue, and if admins want to update it there is a simple CRUD panel with the pre-linked field ready to complete.
I have realised that if I rely on this table, then at any point in the future when the schedule changes, the calculations change for all historic data.
Any ideas on how to keep a 'historic' schedule with as little impact as possible on the database?
What you have is a (potentially) slowly-changing dimension. Basically, there are two approaches:
For each transactional record, include the hours that you are interested in.
Store the schedule with time frames, which capture the schedule at a particular point in time.
In SQL Server, I would normally go for the second option, using effDate and endDate columns to capture the period when the schedule is active.
I would suggest that you start with a deeper explanation of the concept, such as the Wikipedia page.
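To make the second option concrete, here is a minimal sketch of the idea in Java rather than SQL (ScheduleVersion and asOf are illustrative names): each schedule version carries the window in which it was active, and historic calculations look up the version in force on the relevant date.

```java
import java.time.LocalDate;
import java.util.List;
import java.util.Optional;

// One row per schedule version; endDate == null marks the currently active one.
record ScheduleVersion(long venueId, double hoursAvailable,
                       LocalDate effDate, LocalDate endDate) {}

public class ScheduleHistory {
    // Find the schedule in force for a venue on a given date, so historic
    // 'revenue per available covers' calculations stay stable over time.
    static Optional<ScheduleVersion> asOf(List<ScheduleVersion> versions,
                                          long venueId, LocalDate date) {
        return versions.stream()
                .filter(v -> v.venueId() == venueId)
                .filter(v -> !date.isBefore(v.effDate()))
                .filter(v -> v.endDate() == null || date.isBefore(v.endDate()))
                .findFirst();
    }
}
```

The admin CRUD panel then closes the current version (sets its endDate) and inserts a new row instead of overwriting, which keeps the impact on the rest of the schema minimal.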
Goal:
Display the result based on the picture below in Reporting Services 2008 R2.
Problem:
How should I do it?
You also have to remember that in reality the list contains lots of data, maybe a million rows.
In terms of the report itself, this should be a fairly standard implementation.
You'll need to create one Tablix with one Group for Customer (one row), one Group for Artist (two rows: one for the headers and one for the Artist name), then a detail row for the Title.
It looks like you need more formatting options for the Customers Textbox - you could merge the cells in the Customer header row, then insert a Rectangle, which will give you more options to move objects around in the row.
For large reports you've got a few options:
Processing large reports: http://msdn.microsoft.com/en-us/library/ms159638(v=sql.105).aspx
Report Snapshots: http://msdn.microsoft.com/en-us/library/ms156325(v=sql.105).aspx
Report Caching: http://msdn.microsoft.com/en-us/library/ms155927(v=sql.105).aspx
I would recommend scheduling a Snapshot overnight to offload the processing to a quiet time, then making sure the report has sensible pagination set up so not too much data has to be handled at once when viewed in Report Manager (i.e. not trying to view thousands of rows on one page).
Another option would be to set up an overnight Subscription that could save the report to a fileshare or send it as an email.
Basically, the aim is to reduce the amount of processing that needs to be done at peak times by processing the report once for future use, reducing overall resource usage.
I would use a List with textboxes inside for that kind of display.
In addition, you may consider adding a page break after each customer.
Personally, I experienced lots of performance issues when dealing with thousands of rows, not to mention millions.
My advice is to reconsider the report's main purpose: if the report is for exporting, then don't use SSRS for that.
If the report is for viewing, then perhaps you can narrow down the data using parameters based on the user's choice.
Lastly, I wish you good luck :)
I'm a newbie to SSAS. I have a database with an agreement table in which the status of the agreements changes over time; this is stored in the agreement log. The status can be any combination over an extended period of time. One set of questions I will need to answer is how many agreements are of a given status, and also how to show trends in the status over time. I'm reading Kimball, and a periodic snapshot seems to be the best fit, but I'm at a loss how to design the fact table. Do I pre-aggregate the data into periods broken down by status? And then how do I manipulate it in SSAS, and how do aggregations work, given it's more like a bank balance? I sort of get some of the concepts but I'm still pretty confused.
Agreed, this is a good case for Periodic Snapshot.
In this case, you need a status dimension and a fact with a period indicator. Your reports will also need to filter on the period.
ETL is a bit more complicated: during the current period, you clear down and reload the current period's data, while periods prior to the current one are fixed. Obviously, you lose visibility of statuses that change multiple times within a period, so the period should be chosen based on how quickly the data changes as well as how often it's reported. This is also why periodic snapshots are often used in conjunction with transaction fact tables.
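To make the grain concrete, here is a minimal sketch with illustrative names (not your schema): one fact row per (period, status) holding the number of agreements in that status when the snapshot is taken.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative types: one snapshot row per (period, status) combination.
record Agreement(long id, String status) {}
record SnapshotRow(int periodKey, String statusKey, long agreementCount) {}

public class AgreementSnapshot {
    // Build the snapshot rows for one period from the current agreement states.
    static List<SnapshotRow> buildPeriod(int periodKey, List<Agreement> agreements) {
        Map<String, Long> counts = new HashMap<>();
        for (Agreement a : agreements)
            counts.merge(a.status(), 1L, Long::sum);
        return counts.entrySet().stream()
                .map(e -> new SnapshotRow(periodKey, e.getKey(), e.getValue()))
                .toList();
    }
    // ETL per the above: delete the current period's rows and re-insert the
    // output of buildPeriod(...); rows for earlier periods are never touched.
}
```

Because the count behaves like a bank balance, it is semi-additive: it can be summed across the status dimension but not across time. In SSAS, that typically means setting the measure's AggregateFunction to LastNonEmpty (or an average over time) rather than Sum.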