In AnyLogic I created the production chain of a furniture company for five product types. The chain was built from "Delay" and "Service" blocks together with "ResourcePool" blocks to simulate the production time of each product and the availability of the workers at the machining stations.
The goal was to match the load profile to the energy production profile of a photovoltaic system. For this purpose, the production times should be chosen individually. The production profile is loaded into the simulation from an Excel table. Every day the company produces 20 pieces of furniture of different types.
The idea for adjusting the production time and the kind of furniture (every furniture type has a different production time) was to use parameters to determine the start time of production and the selection of the product type. I need two parameters for each product.
In an optimization experiment in AnyLogic, the simulation runs through one day. The objective is to minimize the amount of energy drawn from the grid. After 500 iterations and some adjustments of the value ranges, I get adapted parameters as a result. The resulting parameter values from the optimization can then be transferred to the production chain simulation.
My question is:
Is it possible to transfer the resulting parameters automatically to the production chain model, so that I do not always have to run both simulations separately?
I want to simulate a whole year, to get specific results for the different production profiles over the year, without starting the optimization experiment and the simulation individually for every day.
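Schematically, what I am hoping for is a single custom experiment like the following, which loops over the days, runs the daily optimization, and feeds the best values straight into a run of the production chain model (the Main class, the parameter names, and runDailyOptimization(...) are placeholders, not real code from my model):

```java
// Hedged sketch of an AnyLogic custom experiment body; Main, the parameter
// names, and runDailyOptimization(...) are assumptions.
for (int day = 0; day < 365; day++) {
    // Run the optimization for this day; assumed to return the best values
    // for the start-time and product-type parameters.
    double[] best = runDailyOptimization(day);

    // Create and parameterize a fresh run of the production chain model.
    Engine engine = createEngine();
    engine.setStopTime(24.0); // one day, assuming the model time unit is hours
    Main root = new Main(engine, null, null);
    root.startTimeProduct1 = best[0]; // assumed parameter names
    root.productType1 = (int) best[1];
    // ... set the remaining parameters the same way ...

    engine.start(root);
    engine.runFast(); // run the day without animation

    // Read the results (e.g. energy drawn from the grid) from root here.
    traceln("day " + day + ": grid energy = " + root.gridEnergyKWh);
}
```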
Problem
we have ~50k scheduled financial reports that we periodically deliver to clients via email
each report has its own delivery frequency (date & time, as configured by clients):
weekly
daily
hourly
weekdays only
etc.
Current architecture
we have a table called report_metadata that holds report information
report_id
report_name
report_type
report_details
next_run_time
last_run_time
etc...
every week, all 6 instances of our scheduler service poll the report_metadata table, extract the metadata for all reports due in the following week, and put them in an in-memory timed queue.
Only in the master/leader instance (which is one of the 6 instances):
data in the timed-queue is popped at the appropriate time
processed
a few API calls are made to get a fully-complete and current/up-to-date report
and the report is emailed to clients
the other 5 instances do nothing - they simply exist for redundancy
Proposed architecture
Numbers:
db can handle up to 1000 concurrent connections - which is good enough
the total number of existing reports (~50k) is unlikely to get much larger in the near or distant future
Solution:
instead of polling the report_metadata db every week and storing the data in an in-memory timed queue, all 6 instances will poll the report_metadata db every 60 seconds (with a 10-second offset between instances)
on average, an instance will attempt to pick up work every 10 seconds
data for any single report whose next_run_time is in the past is extracted, the table row is locked, and the report is processed/delivered to clients by that specific instance
after the report is successfully processed, the table row is unlocked and the next_run_time, last_run_time, etc. for the report are updated
In general, the database serves as the master; individual instances of the process can work independently, and the database ensures they do not overlap.
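A minimal sketch of the locked pick-up step, assuming JDBC and PostgreSQL-style SKIP LOCKED semantics (deliverReport and computeNextRunTime are hypothetical helpers; adapt the locking syntax to your database, e.g. READPAST hints on SQL Server):

```java
import java.sql.*;

// Hedged sketch: each instance tries to claim one due report; SKIP LOCKED
// makes concurrent pollers skip rows another instance already holds.
static void pollOnce(Connection conn) throws SQLException {
    conn.setAutoCommit(false);
    try (Statement st = conn.createStatement();
         ResultSet rs = st.executeQuery(
             "SELECT report_id FROM report_metadata " +
             "WHERE next_run_time <= now() " +
             "ORDER BY next_run_time " +
             "LIMIT 1 FOR UPDATE SKIP LOCKED")) {
        if (rs.next()) {
            long reportId = rs.getLong("report_id");
            deliverReport(reportId); // hypothetical: API calls + email delivery
            try (PreparedStatement ps = conn.prepareStatement(
                     "UPDATE report_metadata SET last_run_time = now(), " +
                     "next_run_time = ? WHERE report_id = ?")) {
                ps.setTimestamp(1, computeNextRunTime(reportId)); // hypothetical
                ps.setLong(2, reportId);
                ps.executeUpdate();
            }
        }
        conn.commit();
    } catch (SQLException e) {
        conn.rollback();
        throw e;
    }
}
```

An index on next_run_time would keep the WHERE/ORDER BY cheap here, which also touches on the indexing question below.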
It would help if you could let me know:
whether the proposed architecture is a good/correct solution
which table columns can/should be indexed
any other considerations
I have worked on a different kind of scheduler for a program that delivered analyses at specific moments of the month/week. What I did was group the reports into so-called business-cycle-based time moments: "start of a new week", "start of the month", "start/end of a day/week/month/quarter/year". So I standardized the moments of sending the reports and added the report IDs to a table carrying the details of each report. You can then add things to a cycle or remove them as needed, for example by tagging each report with a label such as EOD (end of day), EOM (end of month), SOW (start of week), etc.
So you could index the moments when clients want to receive their reports and build on that. I hope this comment helps you with your challenge.
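A minimal sketch of this cycle-tag idea, assuming a small schedule table keyed by tag (all names here are illustrative, not from an actual schema):

```java
// Hedged sketch: standardized business-cycle tags instead of free-form
// schedules; the tag names and the report_schedule table are assumptions.
enum CycleTag { SOD, EOD, SOW, EOW, SOM, EOM, EOQ, EOY }

record ReportSchedule(long reportId, CycleTag cycleTag) {}

// At each cycle boundary the scheduler selects everything carrying that tag,
// e.g.: SELECT report_id FROM report_schedule WHERE cycle_tag = 'EOM';
```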
It seems fine to have all 6 instances simply query the metadata table to check which report to process next, as you are suggesting.
It seems odd, though, to have a staggered approach where each server checks once every 60 seconds, offset by 10 seconds. You have 6 servers now, but that may change. Also, I don't understand the "locking" you are suggesting: why not simply set a flag on the row such as [State] = "processing", so the next scheduler knows to skip that row and move on to the next available one? Once a run is processed, you can simply update a [Date_last_processed] column, or maybe something like [last_cycle_complete] = 'YES'.
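A sketch of that flag-based claim, assuming JDBC and the columns mentioned above (the helper methods are hypothetical); doing it in one atomic UPDATE ensures only one instance wins a given row:

```java
import java.sql.*;

// Hedged sketch: claim a due report by flipping its state flag in a single
// atomic UPDATE; only the instance whose UPDATE affects a row processes it.
static boolean tryClaimAndProcess(Connection conn, long reportId) throws SQLException {
    String claim =
        "UPDATE report_metadata SET state = 'processing' " +
        "WHERE report_id = ? AND state <> 'processing' AND next_run_time <= now()";
    try (PreparedStatement ps = conn.prepareStatement(claim)) {
        ps.setLong(1, reportId);
        if (ps.executeUpdate() == 0) return false; // another instance won the row
    }
    deliverReport(reportId);       // hypothetical processing/delivery step
    markProcessed(conn, reportId); // hypothetical: reset state, stamp Date_last_processed
    return true;
}
```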
Alternatively, you could have one server process go through the table and, for each available row, send it off to one of the instances in a round-robin fashion (or keep track of who is busy and who isn't).
I have a data set containing a list of machines with the following columns (good parts, bad parts, production date, end date, time/1pc = time spent to make one piece). This information is used to calculate the 3 performance indicators needed to get the OEE for each machine.
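For reference, the standard decomposition these three indicators feed into looks like this (the mapping to the columns above is my assumption; adapt as needed):

```latex
\begin{aligned}
\text{Quality} &= \frac{\text{good parts}}{\text{good parts} + \text{bad parts}}\\[2pt]
\text{Performance} &= \frac{(\text{time/1pc}) \times (\text{good parts} + \text{bad parts})}{\text{actual run time}}\\[2pt]
\text{Availability} &= \frac{\text{actual run time}}{\text{planned production time}}\\[2pt]
\text{OEE} &= \text{Availability} \times \text{Performance} \times \text{Quality}
\end{aligned}
```

Here the actual run time would come from the production and end dates.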
I've created a user form that lets the user enter the type of machine and the production date; by hitting "OK", the 3 indicators (availability, quality, performance) are calculated via a VBA program in Excel. My main objective is to create a dashboard that shows all these indicators plus the OEE for the machine type selected by the user in a specific month. I have created a worksheet for each machine, with columns for the month of the year, the specific date, the quality, availability and performance indicators, and the OEE. Beside it there is a button that opens a user form to select the desired week for which to calculate these indicators, and here is my issue:
I'm trying to connect the earlier user form that calculates the indicators with this one that lets us choose the week, so that the table for each machine is filled with its indicators. How can I do that?
Hi, and happy new year to all OptaPlanner users,
we have a requirement to plan tours. These tours contain chained, time-windowed activities (deliveries) executed by a number of trucks that changes weekly.
The start time of a single tour can vary and depends on several conditions (e.g. the goods to be delivered must be produced before the tour can start; only a limited number of trucks can be served at the plant's gates at the same time; a truck must be back before starting a new tour). This means the order of the tours can also vary, and time gaps can occur between the tours of a truck.
My design plan is to annotate TourStartTime as a second planning variable in OptaPlanner's VRPTW example and to assign TourStartTime to 2-hour time grains (the planning horizon is 1 week and tours normally do not start at night, so the time grains reflect a simplified calendar of possible tour starts).
The number of available trucks (from external logistics companies) can vary from week to week. Because of this, I wanted to plan with an 'unlimited' number of trucks, while the number of trucks per logistics company that can actually be assigned deliveries is controlled by a constraint (e.g. 'trucks_to_be_used_in_parallel'), along the lines of the sketch below.
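To make the idea concrete, here is the kind of constraint I have in mind, written as a hedged OptaPlanner 8-style ConstraintStreams sketch (the Delivery, Truck, and LogisticsCompany classes and their getters are assumptions):

```java
import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintCollectors;
import org.optaplanner.core.api.score.stream.ConstraintFactory;

// Hedged sketch: penalize using more distinct trucks per logistics company
// than that company has actually made available this week.
Constraint trucksToBeUsedInParallel(ConstraintFactory factory) {
    return factory.forEach(Delivery.class)
            .groupBy(d -> d.getTruck().getLogisticsCompany(),
                     ConstraintCollectors.countDistinct(Delivery::getTruck))
            .filter((company, used) -> used > company.getAvailableTrucks())
            .penalize(HardSoftScore.ONE_HARD,
                      (company, used) -> used - company.getAvailableTrucks())
            .asConstraint("trucks_to_be_used_in_parallel");
}
```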
Can anybody tell me whether this is a feasible design approach, and where I have to avoid traps (ca. 1,000 deliveries/week, 40-80 trucks per day)?
Thank you
Michael
A second planning variable is possible (and might even be the best design, depending on your requirements), but it will blow up the search space, and custom coarse-grained moves might even become necessary to get great results.
Instead, I'd first investigate whether the Truck's TourStartTime can be made a shadow variable. For example, give all trucks a unique priority number. Then make a Truck's TourStartTime a shadow variable: the soonest time the truck can leave. If there are only 3 lanes and 4 trucks want to leave, the 3 trucks with the highest priority numbers leave first (so they get the original TourStartTime, and the 4th truck gets a later one).
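A rough sketch of what that could look like with OptaPlanner 7-style shadow-variable annotations (the class, field, and listener names are assumptions):

```java
import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.CustomShadowVariable;
import org.optaplanner.core.api.domain.variable.PlanningVariableReference;

// Hedged sketch: tourStartTime as a shadow variable that a listener
// recomputes whenever deliveries are (re)assigned to trucks.
@PlanningEntity
public class Truck {

    private int priority; // unique; breaks ties when gate lanes are scarce

    @CustomShadowVariable(
            variableListenerClass = TourStartTimeUpdatingListener.class,
            sources = {@PlanningVariableReference(entityClass = Delivery.class,
                                                  variableName = "previousStandstill")})
    private Long tourStartTime; // soonest time this truck can leave the gate

    // The listener computes the soonest departure: if 4 trucks compete for
    // 3 gate lanes, the 3 with the highest priority keep the earliest slot
    // and the 4th is shifted to a later time grain.
}
```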
I have an MS SQL database for storing raw data on utility usage (electric, water, and gas), which I have implemented to compile data from four disparate automated collection systems. The ultimate goal is to generate invoices from this data.
Different customers have one of a dozen rate structures, each of which may or may not use a non-linear function based on peak demand and total usage to calculate the cost.
It is not unheard of for a particular customer to change from one rate structure to another, or for the rate calculation for a particular rate class to change from year to year, so I would like to put these formulas into new tables within my database where they can be easily referenced and modified.
Ideally, I would want to evaluate one of these dynamic functions as part of a query, without relying on the front end to do it, but I have no idea how that would work.
By request, an example formula of the type I am talking about:
All current customers with Electricity Rate Structure A pay $0.005/kW-hr for the first 2,000 kW-hr consumed, $0.004/kW-hr from 2,000 to 15,000 kW-hr, and $0.003/kW-hr for all consumption above 15,000 kW-hr. Additionally, any customer with a peak demand higher than 50 kW is subject to a $0.002/kW-hr surcharge on all consumption. The coefficients, the thresholds, the number of thresholds, and whether or not the customer is even charged for peak demand can and do change from year to year and from rate structure to rate structure.
The formula for this (if I were programming it) would be:
min(sum(usage), 2000)*0.005 + max(min(sum(usage), 15000) - 2000, 0)*0.004 + max(sum(usage) - 15000, 0)*0.003 + (max(usage) > 50)*sum(usage)*0.002
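One way to make this table-driven, sketched in Java to show the data model (a rate_tier table with a lower bound, an optional upper bound, and a price per kW-hr is my assumption of how the formulas could be stored; the same walk over the tiers could also be done in pure SQL):

```java
import java.util.List;

// Hedged sketch: compute the tiered energy charge from rate rows instead of
// a hard-coded formula; the table layout and names are assumptions.
record RateTier(double lowerBound, Double upperBound, double pricePerKwh) {}

static double energyCharge(double totalUsage, double peakDemandKw,
                           List<RateTier> tiers) {
    double charge = 0.0;
    for (RateTier t : tiers) {
        double upper = (t.upperBound() == null) ? Double.POSITIVE_INFINITY
                                                : t.upperBound();
        // Portion of the usage that falls inside this tier's band.
        double inTier = Math.max(0.0, Math.min(totalUsage, upper) - t.lowerBound());
        charge += inTier * t.pricePerKwh();
    }
    if (peakDemandKw > 50.0) {        // demand surcharge; the threshold and
        charge += totalUsage * 0.002; // coefficient would also live in a table
    }
    return charge;
}

// Rate Structure A above = rows (0, 2000, 0.005), (2000, 15000, 0.004),
// (15000, null, 0.003), plus the >50 kW demand surcharge of $0.002/kW-hr.
```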
I am in the process of making OLAP cubes for data mining purposes.
The domain is instruments which run tests; tests have status IDs 1, 2, 3, meaning OK, warning, and error. I have already deployed the cube and it's working perfectly.
My measure is the sum of my tests. I have a time table associated with the test table, recording when each test was run.
I have four dimensions:
Instrument: holds information about the instruments.
Test: contains all the tests, with information about when each was run.
Status: contains the three statuses mentioned above.
Time: sorts the tests in time.
My question is this: I have another status called 'NotRun'. Unlike the other statuses, 'NotRun' is not saved in the database; it is calculated with a query.
'NotRun' is calculated by selecting all instruments from the instrument table and then removing those instruments that appear in the test table within a given time period.
I want to use MDX to do the above, but instead of supplying a time period I want the cube to handle that for me dynamically.
I don't want to pick a specific year, as in the slicer below; instead, I would like to take care of that dynamically with my time dimension:
where ([Date].[Calendar Year].&[2002])
I am really stuck. Any idea how we can achieve that in Business Intelligence Development Studio 2008?
All the best,
Hassan.
To answer the question "how do I get MDX to pick a date member by itself", see my linked answer.
Or did you want Business Intelligence Development Studio to pop up a box and ask for the dates each time you run the report?