Get Gurobi IIS using Pyomo and gurobipy

Could someone walk me through the steps to get the IIS from a Pyomo model using gurobipy?
opt = SolverFactory('gurobi',solver_io='python')
As a reference, this is what I use in JuMP:
function getIIS(m::JuMP.Model)
    grb_model = m.internalModel.inner
    num_constrs = Gurobi.num_constrs(grb_model)
    Gurobi.computeIIS(grb_model)
    iis_constrs = Gurobi.get_intattrarray(grb_model, "IISConstr", 1, num_constrs)
    m.linconstr[find(iis_constrs)]
end
So, basically I need access to the internal gurobi model to run the computeIIS function, and then I need a way to map the array of rows to the actual Pyomo constraints.
thanks!

You can pass Gurobi parameters as options when you call the solve function, using the options_string argument. Gurobi's Model.write() function will then write the file. In this case you would write a .ilp file, but other file formats exist for different purposes. An example:
solver_parameters = "ResultFile=model.ilp" # write an ILP file to print the IIS
Then, you would add options_string when you call the solve function:
results = solver.solve(instance, options_string=solver_parameters)
You can also string multiple options together with the following syntax. Note the leading space inside the quotes for the LogToConsole and ResultFile options:
solver_parameters = "TimeLimit=60" # set time limit (seconds)
solver_parameters += " LogToConsole=0" # 0 = turn off console output
solver_parameters += " ResultFile=model.ilp" # write an ILP file containing the IIS
The documentation found here applies to solving a model with Gurobi, and the examples at the bottom work with any Gurobi file format:
http://www.gurobi.com/documentation/8.0/refman/solving_a_model2.html
Finally, this link explains the different file formats that Gurobi can write: http://www.gurobi.com/documentation/8.0/refman/model_file_formats.html#sec:FileFormats
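Putting the pieces together, a minimal sketch (instance is a placeholder for your Pyomo model, which must be infeasible for Gurobi to produce an IIS):
from pyomo.environ import SolverFactory

solver = SolverFactory('gurobi', solver_io='python')
# Gurobi computes an IIS and writes it to model.ilp when the model is infeasible
results = solver.solve(instance, options_string="ResultFile=model.ilp")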

See this Pyomo example using suffixes. I believe it's doing what you want.
https://github.com/Pyomo/pyomo/blob/master/examples/pyomo/suffixes/gurobi_ampl_iis.py
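For reference, a condensed sketch of roughly what that example does. It assumes the gurobi_ampl executable from the AMPL solver distribution is available on your PATH, and the toy model here is deliberately infeasible:
from pyomo.environ import ConcreteModel, Var, Constraint, Objective, Suffix, SolverFactory

model = ConcreteModel()
model.x = Var(bounds=(0, 1))
model.c1 = Constraint(expr=model.x >= 2)  # contradicts the variable bound
model.obj = Objective(expr=model.x)

# declare an import suffix so Pyomo picks up the solver's 'iis' information
model.iis = Suffix(direction=Suffix.IMPORT)

opt = SolverFactory('gurobi_ampl', symbolic_solver_labels=True)
opt.options['iisfind'] = 1  # ask Gurobi to find an IIS when infeasible
results = opt.solve(model)

# map IIS membership back to the Pyomo constraints
for con in model.component_data_objects(Constraint, active=True):
    if model.iis.get(con):
        print("%s is in the IIS" % con.name)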

Related

ANSYS Mechanical Workbench Scripting - Accessing Parameters

Ansys gurus,
My project is a static structural analysis using ANSYS Workbench Mechanical. I have created the parametrized geometry (via DesignModeler) and material properties in Workbench, and used ACT scripting to configure the model. However, I can't find much information on how to access the parameters via ACT scripting.
I have confirmed that the geometric parameters are successfully created in Workbench, e.g.:
ID   Parameter Name   Value   Unit
P1   diameter         50      um
The documentation LINK suggests that I can obtain a parameter using Analysis.GetParameter(); however, the following code didn't work for me and resulted in the error below.
Code:
STATIC_STRUCTURAL = ExtAPI.DataModel.AnalysisByName("Static Structural")
HEIGHT = STATIC_STRUCTURAL.GetParameter('height')
Error:
Property not found.
Do you have any suggestions on the cause of this error? Is it because the parameters were not imported from the Workbench "project schematic" into "Model", or is the code I used to retrieve the parameters incorrect? In either case, could you advise the correct method to access the parameters? Thank you!
hawkoli1987
If you want to access a parameter from the "project schematic" page, you can retrieve all parameters as a list. If you then want to do something with a parameter inside Mechanical, you have to send the commands to your model:
# Access the geometric parameters defined on the project schematic
allParameters = Parameters.GetAllParameters()
for parameter in allParameters:
    print parameter.DisplayText
    if parameter.DisplayText == 'height':
        heightParameter = parameter

# Loop over all systems in the project
for system in GetAllSystems():
    # Get the Model container
    model = system.GetContainer(ComponentName="Model")
    # Refresh and open the Model component
    system.Refresh()
    model.Edit(Interactive=True)
    # Code to be sent to ANSYS Mechanical
    cmd = '''
here goes your ACT script as a string. Make sure there are no leading spaces or tabs.
'''
    # Send the code and exit Mechanical
    model.SendCommand(Language='Python', Command=cmd)
    model.Exit()

print "Finished script execution."

Call an app in a loop with different parameters in Cytoscape

I'm new to Cytoscape. I want to know how I can run an app (for example the MCL clustering algorithm) multiple times with different parameters in Cytoscape. Is there any way to write a script to do that instead of running it manually multiple times for different parameters?
Thanks!
Thanks Scooter. I saw his answer, but I still have a problem with MCODE.
I figured it out by reading the paper "Cytoscape Automation: empowering workflow-based network analysis". I'll put the script here in case somebody else has the same question.
From Python you need the following imports:
import requests, json
import numpy
REST_ENDPOINT = 'http://localhost:1234'
Then, say we want to use the affinity propagation clustering algorithm. Go to Help -> Automation -> CyREST Command API; there you can find the app and all its parameters. Load the input network in Cytoscape at the beginning.
counter = 0
ap_clusters = dict()
for i in numpy.arange(-1.0, 1.1, 0.1):
    message_body = {
        "preference": str(round(i, 1))
    }
    response = requests.post(REST_ENDPOINT + '/v1/commands/cluster/ap',
                             data=json.dumps(message_body),
                             headers={'Content-Type': 'application/json'})
    response_data = response.json()['data']
    ap_clusters[counter] = response_data['clusters']
    counter += 1
The above code calls AP clustering multiple times from Python. For AP and MCL it works across multiple parameter sets. However, when I tried to call MCODE with different sets of parameters, the connection dropped and the Cytoscape app closed. It would only run for one set of parameters.
This is the error:
" raise ConnectionError(err, request=request)
ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))"
Here is the code for the MCODE algorithm:
counter = 0
mcode_clusters = dict()
for i in numpy.arange(3, 6, 1):
    for j in numpy.arange(0.1, 0.56, 0.05):  # vertex weight percentage
        for h in ["on", "off"]:
            for f in ["on", "off"]:
                if f == "on":
                    for p in [0, 0.1, 0.2]:  # fluffing percentage
                        message_body = {
                            "fluff": f,
                            "fluffNodeDensityCutoff": str(round(p, 1)),
                            "haircut": h,
                            "maxDepthFromStart": str(i),
                            "nodeScoreCutoff": str(round(j, 1))
                        }
                        response = requests.post(REST_ENDPOINT + '/v1/commands/cluster/mcode',
                                                 data=json.dumps(message_body),
                                                 headers={'Content-Type': 'application/json'})
                        response_data = response.json()['data']
                        mcode_clusters[counter] = response_data['clusters']
                        counter += 1
If you have any solutions, I'd really appreciate it if you could share them with me.
Thanks.
SaRa
I think this was answered by Ruth pretty clearly in cytoscape-helpdesk:
You can do all of the above. Whatever is easiest for you.
There is a library py2cytoscape that you can use to issue commands to Cytoscape from Python. Info can be found here: https://py2cytoscape.readthedocs.io/en/latest/
For more info on automation in Cytoscape, check out: http://manual.cytoscape.org/en/stable/Programmatic_Access_to_Cytoscape_Features_Scripting.html
But you can also run it through automation. You can create a text file with each of your commands (for example a list of commands like: cluster mcl attribute="correlation" network=1234) and then go to Tools --> Execute Batch File to execute the whole file. I am not sure if it supports loops. If you want to loop through anything, I would recommend using Python.
Thanks,
Ruth
I'll just add that currently, looping isn't supported in batch files.
-- scooter
In regard to my problem, I have to say:
There are two MCODE apps in Cytoscape: one in clusterMaker and one that belongs to Cytoscape itself. When I called MCODE with the command '/v1/commands/cluster/mcode', I was invoking the clusterMaker version while using the parameter names of the Cytoscape one. I changed the command to '/v1/commands/mcode/cluster' and the problem is now solved.
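In code, the only change needed is the endpoint; everything else stays as in the MCODE loop above:
# Cytoscape's own MCODE app (whose parameter names are used above):
response = requests.post(REST_ENDPOINT + '/v1/commands/mcode/cluster',
                         data=json.dumps(message_body),
                         headers={'Content-Type': 'application/json'})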
Many thanks.
SaRa

How to write summaries for multiple runs in Tensorflow

If you look at the Tensorboard dashboard for the cifar10 demo, it shows data for multiple runs. I am having trouble finding a good example showing how to set the graph up to output data in this fashion. I am currently doing something similar to this, but it seems to be combining data from runs and whenever a new run starts I see the warning on the console:
WARNING:root:Found more than one graph event per run.Overwritting the graph with the newest event
The solution turned out to be simple (and probably a bit obvious), but I'll answer anyway. The writer is instantiated like this:
writer = tf.train.SummaryWriter(FLAGS.log_dir, sess.graph_def)
The events for the current run are written to the specified directory. Instead of having a fixed value for the logdir parameter, just set a variable that gets updated for each run and use that as the name of a sub-directory inside the log directory:
writer = tf.train.SummaryWriter('%s/%s' % (FLAGS.log_dir, run_var), sess.graph_def)
Then just specify the root log_dir location when starting tensorboard via the --logdir parameter.
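For example, a timestamp makes a convenient run name (the naming scheme here is just one choice; any string that is unique per run works):
import time

run_var = time.strftime('run_%Y-%m-%d_%H-%M-%S')  # hypothetical run identifier
writer = tf.train.SummaryWriter('%s/%s' % (FLAGS.log_dir, run_var), sess.graph_def)
# then start TensorBoard with: tensorboard --logdir=/path/to/log_dir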
As mentioned in the documentation, you can also specify multiple log directories when running TensorBoard. Alternatively, you can create multiple run subfolders inside the log directory to visualize the runs together in the same graphs.

Nonetype Error on Python Google Search Script - Is this a spam prevention tactic?

Fairly new to Python so apologies if this is a simple ask. I have browsed other answered questions but can't seem to get it functioning consistently.
I found the script below, which prints the top Google result for a set of defined terms. It works the first few times I run it, but once I have searched 20 or so terms it displays the following error:
Traceback (most recent call last):
File "term2url.py", line 28, in <module>
results = json['responseData']['results']
TypeError: 'NoneType' object has no attribute '__getitem__'
From what I can gather, this indicates that one of the attributes does not have a defined value (potentially a result of Google blocking me?). I attempted to solve the issue by adding the else clause, but I still run into the same problem.
Any help would be greatly appreciated; I have pasted the full code below.
Thanks!
#
# This is a quick and dirty script to pull the most likely url and description
# for a list of terms. Here's how you use it:
#
# python term2url.py < {a txt file with a list of terms} > {a tab delimited file of results}
#
# You must install the simplejson module to use it
#
import urllib
import urllib2
import simplejson
import sys

# Read the terms we want to convert into URLs from input redirected from the command line
terms = sys.stdin.readlines()
for term in terms:
    # Define the query to pass to the Google Search API
    query = urllib.urlencode({'q': term.rstrip("\n")})
    url = "http://ajax.googleapis.com/ajax/services/search/web?v=1.0&%s" % (query)

    # Fetch the results and convert to JSON format
    search_results = urllib2.urlopen(url)
    json = simplejson.loads(search_results.read())

    # Process the results by pulling the first record, which has the best match
    results = json['responseData']['results']
    for r in results[:1]:
        if results is not None:
            url = r['url']
            desc = r['content'].encode('ascii', 'replace')
        else:
            url = "none"
            desc = "none"

    # Print the results to stdout. Use redirect to capture the output
    print "%s\t%s" % (term.rstrip("\n"), url)

    import time
    time.sleep(1)
Here are some Python details for you first:
None is a valid object in Python, of the type NoneType:
print(type(None))
Produces:
<class 'NoneType'>
And the "no attribute" error you got is normal when you try to access a method or attribute that an object doesn't have. In this case, you were attempting to use the __getitem__ syntax (object[item_index]), which NoneType objects don't support because they don't have a __getitem__ method.
The point of the previous explanation is that your assumption about what your error means is correct: your results object is essentially empty.
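In practical terms, the None check has to happen before you index into the response, not after. A minimal sketch using the names from your script:
json = simplejson.loads(search_results.read())
response_data = json.get('responseData')
if response_data is None:
    # Google returned no data (rate limited or blocked); record placeholders
    url = "none"
    desc = "none"
else:
    for r in response_data['results'][:1]:
        url = r['url']
        desc = r['content'].encode('ascii', 'replace')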
As for why you're hitting this in the first place, I believe you are running up against Google's API limits. It looks like you're using the old API, which is now deprecated. The number of search results (not queries) used to be limited to around 64 per query, and there used to be no rate or per-day limit, but since the API has been deprecated for over 5 years, there may be new undocumented limits.
I don't think it necessarily has anything to do with spam prevention, but I do believe it is an undocumented limit.
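If you want the script to back off instead of crashing when it hits the limit, something like the following sketch works (the retry counts and delays are arbitrary choices, and fetch_json is a helper name invented here):
import time

def fetch_json(url, retries=3, delay=2):
    # retry with a growing delay when Google returns an empty response
    for attempt in range(retries):
        search_results = urllib2.urlopen(url)
        json = simplejson.loads(search_results.read())
        if json.get('responseData') is not None:
            return json
        time.sleep(delay * (attempt + 1))
    return None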

Renaming an Amazon CloudWatch Alarm

I'm trying to organize a large number of CloudWatch alarms for maintainability, and the web console grays out the name field on an edit. Is there another method (preferably something scriptable) for updating the name of CloudWatch alarms? I would prefer a solution that does not require any programming beyond simple executable scripts.
Here's a script we use to do this for the time being:
import sys
import boto

def rename_alarm(alarm_name, new_alarm_name):
    conn = boto.connect_cloudwatch()

    def get_alarm():
        alarms = conn.describe_alarms(alarm_names=[alarm_name])
        if not alarms:
            raise Exception("Alarm '%s' not found" % alarm_name)
        return alarms[0]

    alarm = get_alarm()

    # work around boto comparison serialization issue
    # https://github.com/boto/boto/issues/1311
    alarm.comparison = alarm._cmp_map.get(alarm.comparison)

    alarm.name = new_alarm_name
    conn.update_alarm(alarm)

    # update actually creates a new alarm because the name has changed, so
    # we have to manually delete the old one
    get_alarm().delete()

if __name__ == '__main__':
    alarm_name, new_alarm_name = sys.argv[1:3]
    rename_alarm(alarm_name, new_alarm_name)
It assumes you're either on an EC2 instance with a role that allows this, or you have a ~/.boto file with your credentials. It's easy enough to manually add yours.
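Usage, assuming the script is saved as rename_alarm.py (a file name chosen here for illustration):
python rename_alarm.py old-alarm-name new-alarm-name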
Unfortunately, it looks like this is not currently possible. I looked around for the same solution, but it seems neither the console nor the CloudWatch API provides that feature.
Note: we can, however, copy the existing alarm with the same parameters and save it under a new name.
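A sketch of that copy-then-delete approach with boto3 (the alarm names and the field subset are illustrative; extend the list if your alarms use units, treat-missing-data settings, etc.):
import boto3

cw = boto3.client('cloudwatch')
old = cw.describe_alarms(AlarmNames=['old-alarm-name'])['MetricAlarms'][0]

# keep only fields that put_metric_alarm() accepts (a hand-picked subset)
fields = ('MetricName', 'Namespace', 'Statistic', 'Period', 'EvaluationPeriods',
          'Threshold', 'ComparisonOperator', 'Dimensions', 'ActionsEnabled',
          'AlarmActions', 'OKActions', 'InsufficientDataActions')
params = {k: old[k] for k in fields if k in old}
params['AlarmName'] = 'new-alarm-name'
cw.put_metric_alarm(**params)

# the "rename" is really a copy, so remove the original separately
cw.delete_alarms(AlarmNames=['old-alarm-name'])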