I would love your technical advice: In the simplified example attached (https://cloudstor.aarnet.edu.au/plus/s/qhfDnUAdnYw6wxs - EDU link), I wish to reset the values in the smooth and delay functions to their initial values every 52 weeks (430 and 50 respectively). Indeed, in my more complex model, I wish to reset all the model’s values and parameters on a regular basis.
Would you mind helping me with that? Thanks a lot!
I used COUNTER to track when to reset to the initial value. See below:
Converter 3: SMTH3(Stock_3, 10, 430)
smthstock3: IF Converter_2 > 0 THEN SMTH3(Stock_3, 10, 430) ELSE INIT(Converter_3)
Converter 2: COUNTER(0, 52)
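To make the behaviour I'm after concrete, here is a rough Python sketch (purely illustrative, not syntax for the modelling tool), under the assumption that SMTH3(input, delay, init) behaves like three cascaded first-order smooths, each with averaging time delay/3:

# Illustrative only: a third-order smooth that is re-initialised every 52 weeks.
# The "three cascaded first-order smooths" structure is an assumption about SMTH3.
DT = 1.0          # time step, in weeks
DELAY = 10.0      # smoothing time, as in SMTH3(Stock_3, 10, 430)
INIT = 430.0      # initial value to reset to
RESET_EVERY = 52  # weeks between resets

def smooth_with_reset(inputs):
    stages = [INIT, INIT, INIT]           # the three internal smoothing stages
    out = []
    for week, x in enumerate(inputs):
        if week > 0 and week % RESET_EVERY == 0:
            stages = [INIT, INIT, INIT]   # reset to the initial value every 52 weeks
        prev = x
        for i in range(3):                # cascade of three first-order smooths
            stages[i] += DT * (prev - stages[i]) / (DELAY / 3.0)
            prev = stages[i]
        out.append(stages[2])             # the SMTH3 output is the last stage
    return out

print(smooth_with_reset([430.0] * 10 + [500.0] * 94)[:5])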
I am saving the best solution into the DB, and we display that on the web page. I am looking for some solution where a user can add more visits, but that should not change already published trips.
I have checked the documentation and found ProblemFactChange can be used, but only when the solver is already running.
In my case the solver has already terminated and the solution has been published. Now I want to add more visits to the vehicle without modifying the existing visits of the vehicle. Is this possible with OptaPlanner? If yes, any example or documentation would be very helpful.
You can use the @PlanningPin annotation to avoid unwanted changes.
Optaplanner - Pinned planning entities
If you're not looking for pinning (see Ismail's excellent answer), take a look at the OptaPlanner School Timetabling example, which allows adding lessons between solver runs. The lessons simply get stored in the database and then get loaded when the solver starts.
The difficulty with VRP is the chained model complexity (we're working on an alternative): if you add a visit X between A and B, then make sure that afterwards A.next = X, X.previous = A, X.next = B, B.previous = X and X.vehicle = A.vehicle. Not to mention the arrival times, etc.
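As a rough illustration of that bookkeeping (plain Python objects with hypothetical fields, not OptaPlanner planning entities or its API):

# Illustrative sketch of inserting a visit into a chained model.
class Visit:
    def __init__(self, name, vehicle=None):
        self.name = name
        self.previous = None
        self.next = None
        self.vehicle = vehicle

def insert_between(x, a, b):
    """Insert visit x between visits a and b, which must be adjacent in the chain."""
    assert a.next is b and b.previous is a
    a.next = x
    x.previous = a
    x.next = b
    b.previous = x
    x.vehicle = a.vehicle
    # arrival times of x, b and every later visit in the chain still need recomputing

a, b, x = Visit("A", vehicle="truck-1"), Visit("B"), Visit("X")
a.next, b.previous = b, a
b.vehicle = a.vehicle
insert_between(x, a, b)
print(a.next.name, x.previous.name, x.next.name, x.vehicle)  # X A B truck-1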
My suggestion would be to re-solve what is left after the changes have been introduced. Let's say you have visited half of your destinations (A -> B -> C) but not yet (C -> D -> E) when two new possible destinations (D' and E') are introduced. Wouldn't this be the same as starting in C and trying to plan for D, D', E and E'? The solution does need to be kept up to date with the progress, though, so that the remainder plus the changes can be the input to the next solve.
Just my two cents.
I am working on a project using the Sentinel-1 GRD product in Google Earth Engine and I have found a couple of examples of missing data, apparently in swath overlaps in the descending orbit. This is not the issue discussed here and explained on the GEE developers forum; it is a much larger gap and does not appear to be the product of the terrain correction described for that other issue.
This gap seems to persist regardless of the year chosen for the date range or the polarization. The gap is resolved by changing the orbit filter param from 'DESCENDING' to 'ASCENDING', presumably because of the different swaths, or by increasing the date range. I get that increasing the date range increases revisits and thus coverage, but is this then just a byproduct of the orbital geometry, i.e. it takes more than the standard temporal repeat to image that area? I am just trying to understand where this data gap is coming from.
Code example:
var geometry = ee.Geometry.Polygon(
    [[[-123.79472413785096, 46.20720039434629],
      [-123.79472413785096, 42.40398120362418],
      [-117.19194093472596, 42.40398120362418],
      [-117.19194093472596, 46.20720039434629]]], null, false);

// Descending-orbit IW scenes with both VV and VH over the study area.
var filtered = ee.ImageCollection('COPERNICUS/S1_GRD')
    .filterDate('2019-01-01', '2019-04-30')
    .filterBounds(geometry)
    .filter(ee.Filter.eq('orbitProperties_pass', 'DESCENDING'))
    .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
    .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
    .filter(ee.Filter.eq('instrumentMode', 'IW'))
    .select(['VV', 'VH']);
print(filtered);

// Temporal mean; the data gap shows up as masked pixels here.
var filtered_mean = filtered.mean();
print(filtered_mean);
Map.addLayer(filtered_mean.select('VH'), {min: -25, max: 1}, 'filtered');
You can view an example here: https://code.earthengine.google.com/26556660c352fb25b98ac80667298959
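In case it helps anyone reproduce the coverage pattern, here is a rough sketch using the Python ee API (it assumes authentication is already set up, and the rectangle is only an approximation of the polygon above) that counts how many of the filtered scenes cover each pixel:

import ee

ee.Initialize()  # assumes ee.Authenticate() has already been run

# Approximate bounding box of the polygon used in the Code Editor script above.
geometry = ee.Geometry.Rectangle([-123.79, 42.40, -117.19, 46.21])

collection = (ee.ImageCollection('COPERNICUS/S1_GRD')
              .filterDate('2019-01-01', '2019-04-30')
              .filterBounds(geometry)
              .filter(ee.Filter.eq('orbitProperties_pass', 'DESCENDING'))
              .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
              .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
              .filter(ee.Filter.eq('instrumentMode', 'IW'))
              .select('VH'))

# Per-pixel count of images; the gap shows up as pixels covered by zero scenes.
coverage = collection.count()
print(coverage.getInfo()['bands'])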
My friends,
In the past couple of years I have read a lot about AI with JS and some libraries like TensorFlow. I have great interest in the subject but have never used it on a serious project. However, after struggling a lot with linear regression to solve an optimization problem I have, I think I will finally get much better results, with better performance, using AI. I have worked for 12 years in web development, with lots of server-side code, but have never worked with any AI library, so please have a little patience with me if I say something stupid!
My problem is this: for every user that visits our platform (website) we save the hour, the day of the week, whether the device requesting the page was a smartphone or a computer, and so on, for the FIRST access the user made. If the user keeps visiting other pages, we don't care; we only save the data of the FIRST visit. And if the user at any time does something that we consider a conversion, we assign that conversion to the record of the first access that user made. So we have almost 3 million rows like this:
SESSION HOUR DAY_WEEK DEVICE CONVERSION
9847 7 MONDAY SMARTPHONE NO
2233 13 TUESDAY COMPUTER YES
5543 19 SUNDAY COMPUTER YES
3721 8 FRIDAY SMARTPHONE NO
1849 12 SUNDAY COMPUTER NO
6382 0 MONDAY SMARTPHONE YES
What I would like to do is this: the next time a user visits our platform, we want to know the probability of that user making a conversion. If a user accesses our website now, depending on their device, day of week, hour and so on, we want to know the probability of that user making a future conversion. With that, we can show very specific messages to the user while they are using our platform, and a different price model according to that probability.
CURRENTLY we are using a linear regression, and it predicts whether the user will make a conversion with an accuracy of around 30%. That's pretty low, but so far it's the best we've got, and this linear regression generates an almost 18% increase in conversions when we use it to show specific messages/prices to that specific user compared to when we don't use it. So, with 30% accuracy our linear regression already provides 18% better conversions (and with that, higher revenue and so on).
In case you are curious, our linear regression model works like this: we generate a linear equation for every first user access on our system, with coefficients that our system tries to find in order to minimize the squared error (expected value - predicted value)². Using the data above, our model would generate the equations below (SUNDAY = 0, MONDAY = 1...COMPUTER = 0, SMARTPHONE = 1... CONVERSION YES = 1 and NO = 0):
A*7 + B*1 + C*1 = 0
A*13 + B*2 + C*0 = 1
A*19 + B*0 + C*0 = 1
A*8 + B*5 + C*1 = 0
A*12 + B*0 + C*0 = 0
A*0 + B*1 + C*1 = 1
So, our system finds the best A, B and C that minimize the error. How can we do that with AI? If possible, it would be nice if we could use TensorFlow or anything with JS! I know there are several AI models, and I have no idea which one would best fit what we need!
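To make it concrete, here is the kind of thing I imagine: a rough sketch in Python / tf.keras with made-up placeholder data and a guessed feature encoding (I assume the layers API in TensorFlow.js looks similar). Is a logistic-regression-style model like this the right direction?

# Rough sketch only. Assumes X holds encoded features (e.g. hour scaled to 0-1,
# one-hot day of week, 0/1 device flag) and y holds 0/1 conversion labels;
# the encoding and placeholder data below are assumptions, not our real pipeline.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.random((1000, 9)).astype("float32")               # hour + 7 day dummies + device
y = rng.integers(0, 2, size=(1000, 1)).astype("float32")  # 0/1 conversion labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(1, activation="sigmoid"),        # outputs a conversion probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

# model.predict returns the probability of conversion for a new visit
prob = model.predict(X[:1])
print(float(prob[0][0]))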
How can I get the available options for a certain question in the Watson Conversation API?
For example, I have a conversation app, and in some cases I need to give the users a list to select an option from.
So I am searching for a way to get the available reply options for a certain question.
I can't answer the NPM part, but you can get a list of the top 10 possible answers by setting alternate_intents to true. For example:
{
  "context": {
    "conversation_id": "cbbea7b5-6971-4437-99e0-a82927607079",
    "system": {
      "dialog_stack": ["root"],
      "dialog_turn_counter": 1,
      "dialog_request_counter": 1
    }
  },
  "alternate_intents": true,
  "input": {
    "text": "Is it hot outside?"
  }
}
This will return at most the top ten answers. If you have fewer intents than that, it will only show those.
Part of your JSON response will have something like this:
"intents":[{
"intent":"temperature",
"confidence":0.9822100598134365
},
{
"intent":"conditions",
"confidence":0.017789940186563623
}
This won't get you the output text from the node, though. So you will need to have your answers stored elsewhere to cross-reference.
Also be aware that just because it is in the list, doesn't mean it's a valid answer to give the end user. The confidence level needs to be taken into account.
The confidence level also does not work like a normal confidence. You need to determine your upper and lower bounds. I detail this briefly here.
Unlike earlier versions of WEA, the confidence is relative to the number of intents you have. So the quickest way to find the lowest confidence is to send a really ambiguous word.
These are the results I get for determining temperature or conditions.
treehouse = conditions / 0.5940327076534431
goldfish = conditions / 0.5940327076534431
music = conditions / 0.5940327076534431
See a pattern? 🙂 So I will set the low confidence level at 0.6. Next is to determine the higher confidence range. You can do this by mixing intents within the same question text. It may take a few goes to get a reasonable result.
These are results from trying this (C = Conditions, T = Temperature).
hot rain = T/0.7710267712183176, C/0.22897322878168241
windy desert = C/0.8597747113239446, T/0.14022528867605547
ice wind = C/0.5940327076534431, T/0.405967292346557
I purposely left out high confidence ones. In this case I am going to go with 0.8 as the high confidence level.
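As a rough sketch (plain Python over the JSON shown above, no Watson SDK calls), applying those bounds might look like this:

# Assumes the intents list is sorted by confidence, as in the response above,
# and uses the 0.6 / 0.8 bounds found empirically.
LOW, HIGH = 0.6, 0.8

def triage(intents):
    top = intents[0]
    if top["confidence"] >= HIGH:
        return "answer", top["intent"]      # confident enough to answer directly
    if top["confidence"] >= LOW:
        # middling confidence: offer the plausible intents as options to the user
        return "ask", [i["intent"] for i in intents if i["confidence"] >= LOW]
    return "fallback", None                 # below the low bound: trust none of them

response_intents = [
    {"intent": "temperature", "confidence": 0.9822100598134365},
    {"intent": "conditions", "confidence": 0.017789940186563623},
]
print(triage(response_intents))             # ('answer', 'temperature')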
I know other people have asked similar questions in the past, but I am still stuck on how to solve the problem and was hoping someone could offer some help. Using PsychoPy, I would like to present different images: specifically 16 emotional trials, 16 neutral trials and 16 face trials. I would like to pseudo-randomize the loop such that there would not be more than 2 consecutive emotional trials. I created the experiment in Builder but compiled a script after reading through previous posts on pseudo-randomization.
I have read the previous posts that suggest creating randomized Excel files and using those, but considering how many trials I have, I think that would be too many files, and I was hoping for some help with coding. I have tried to implement and tweak some of the code that has been posted for my experiment, but to no avail.
Does anyone have any advice for my situation?
Thank you,
Rae
Here's an approach that will always converge very quickly, given that you have 16 of each type and only reject runs of more than two emotion trials. @brittUWaterloo's suggestion to generate trials offline is very good; this is what I do myself, typically. (I like to have a small number of random orders, run them forward for some subjects and backwards for others, and prescreen them to make sure there are no weird or unintended juxtapositions.) But the algorithm below is certainly safe enough to do within an experiment if you prefer.
This first example assumes that you can represent a given trial using a string, such as 'e' for an emotion trial, 'n' for neutral and 'f' for face. This would work with 'emo', 'neut' and 'face' as well, not just single letters; just change 'eee' to 'emoemoemo' in the code:
import random

trials = ['e'] * 16 + ['n'] * 16 + ['f'] * 16
while 'eee' in ''.join(trials):
    random.shuffle(trials)
print(trials)
Here's a more general way of doing it, where the trial codes are not restricted to be strings (although they are strings here for illustration):
import random

def run_of_3(trials, obj):
    # detect if there's a run of at least 3 objects 'obj'
    for i in range(2, len(trials)):
        if trials[i-2: i+1] == [obj] * 3:
            return True
    return False

tr = ['e'] * 16 + ['n'] * 16 + ['f'] * 16
while run_of_3(tr, 'e'):
    random.shuffle(tr)
print(tr)
Edit: To create a PsychoPy-style conditions file from the trial list, just write the values into a file like this:
with open('emo_neu_face.csv', 'w') as f:
    f.write('stim\n')        # this is a 'header' row
    f.write('\n'.join(tr))   # these are the values
Then you can use that as a conditions file in a Builder loop in the regular way. You could also open this in Excel, and so on.
This is not quite right, but hopefully it will give you some ideas. I think you could occasionally get caught in an infinite cycle in the elif statement if the last three items ended up the same, but you could add some sort of counter there. In any case, this shows a strategy you could adapt. Rather than put this in the experimental code, I would generate the trial sequence separately at the command line, and then save a successful output as a list in the experimental code to show to all participants, and know things wouldn't crash during an actual run.
import random as r

# making some dummy data
abc = ['f'] * 10 + ['e'] * 10 + ['d'] * 10

def f(l1, l2):
    # just looking at the output to see how it works; can delete
    print("l1 = " + str(l1))
    print(l2)
    if not l2:
        # checks if second list is empty; if so, we are done
        out = list(l1)
    elif l1[-1] == l1[-2] and l1[-1] == l2[0]:
        # shuffling changes the list in place, have to copy it to use it
        r.shuffle(l2)
        t = list(l2)
        f(l1, t)
    else:
        print("i am here")
        l1.append(l2.pop(0))
        f(l1, l2)
    return l1
You would then run it with something like newlist = f(abc[0:2], abc[2:])