Gurobi Python - Add certain vars

I'm new to Gurobi and I'd like to know how to add only certain variables to a model. For example, if I have an incomplete graph, I'd like to add a variable x[i,j] for each arc that actually exists in the graph. I don't want to add all the arcs of the complete graph, because the graph has a huge number of nodes and my computer runs out of memory. So I'm trying to avoid defining variables over full index lists.
Thanks in advance,

I assume you already know which arcs you want to add. If so, you can define a collection (list or dict) of arcs like Arcs = {(i,j): value of arc (i,j)}. This collection contains only the arcs you want to add.
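A minimal gurobipy sketch of this approach (the arc data here is made up for illustration):

import gurobipy as gp

# Hypothetical sparse arc data: only the arcs that actually exist.
arcs, cost = gp.multidict({
    (0, 1): 4.0,
    (0, 2): 7.0,
    (1, 2): 2.5,
})

m = gp.Model("sparse_arcs")
# One variable per listed arc, instead of one for every (i, j) pair.
x = m.addVars(arcs, obj=cost, name="x")
m.update()
print(m.NumVars)  # 3, not 9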

This is illustrated in the netflow.py example, which can be found in the examples\python subdirectory.

Related

How to access net displacements in pyiron

Using pyiron, I want to calculate the mean square displacement of the ions in my system. How do I see the total displacement (i.e. not folded back by periodic boundary conditions) without dumping very frequently and checking when an atom passes over the boundary and gets wrapped?
Try to compare job['output/generic/unwrapped_positions'][-1] and job.structure.positions+job.output.total_displacements[-1]. If they deliver the same values, it's definitely fine both ways. If not, you can post the relevant lines in your notebook here.
I'd like to add a few comments to Jan's answer:
While job['output/generic/unwrapped_positions'] returns the unwrapped positions parsed from the output files, job.output.total_displacements returns the displacement of atoms calculated from each pair of consecutive snapshots. So if an atom moves more than half the box length in any direction, job.output.total_displacements will give wrong coordinates. Therefore, job['output/generic/unwrapped_positions'] is generally more trustworthy, but it is not available in all the codes (since some codes simply do not provide an output for unwrapped positions).
Moreover, if an interactive job is used, it is possible that job.structure.positions does not return the initial positions, i.e. job.structure.positions+job.output.total_displacements won't be initial positions + displacements.
So, in short, my answer to your question would rather be: "Use job['output/generic/unwrapped_positions'], and if it's not available, use job.structure.positions+job.output.total_displacements, but be aware of the potential problems you might run into."
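As a concrete check, a minimal sketch along Jan's lines (assuming job is a finished MD job object, e.g. obtained via pr.load):

import numpy as np

# Both routes should give the unwrapped coordinates of the last snapshot,
# with shape (n_atoms, 3).
unwrapped = job['output/generic/unwrapped_positions'][-1]
reconstructed = job.structure.positions + job.output.total_displacements[-1]
print(np.allclose(unwrapped, reconstructed))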

Using collections to create random buildings with Blender

I had the idea of creating a fantasy city, and to avoid having the same house over and over (but without manually creating hundreds of houses) I was thinking of creating collections like "windows", "doors", "roofs", etc., and then creating objects with vertices assigned to groups with the same names ("windows" vertex groups, "doors" vertex groups, etc.). Blender would then pick, for each instance of a house, a random window for each vertex in the group, and the same for doors, roofs, etc.
Is there a way of doing this? (I couldn't find anything online.) Or do I need to create a custom addon? If so, any good reference or starting point where something close to this is done?
I've thought of particle systems and child objects, but couldn't find a way to attach a random part of a collection to a vertex. I also thought of Booleans, but they have no option to attach to specific vertices, nor to use collections. So I'm out of ideas on how to approach this.
What I have in mind:
Create the basic shape, and assign vertices to the "windows" vertex group:
https://i.imgur.com/DAkgDR3.png
And then have random objects from the "Windows" collection attached to those vertices, as either a particle or a modifier:
https://i.imgur.com/rl5BDQL.png
Thanks for any help :)
Ok, I've found a way of doing this.
I'm using 3 particle systems (doors, roofs and windows), each using vertices as emitters, and using vertex groups to define where to place one of the different options.
To keep a particle emitter from putting more than one object per vertex, I created a small script that counts the vertices in each vertex group and updates each particle system's emission count accordingly.
import bpy

o = bpy.data.objects["buildings"]
groups = ["windows", "doors", "roofs"]
for group in groups:
    # Index of this vertex group on the object
    vid = o.vertex_groups.find(group)
    # Vertices that belong to this vertex group
    verts = [v for v in o.data.vertices if vid in [vg.group for vg in v.groups]]
    # One particle per vertex in the group
    bpy.data.particles[group].count = len(verts)
I've used someone's code from Stack Overflow for counting the number of vertices in a vertex group, but I can't find the link to the specific question again, so if you see your code here, please do comment and I'll update my answer with the proper credit.

Does the ordering of mesh elements change from run to run for a constrained triangulation under CGAL?

I iterate over finite_vertices, finite_edges and finite_faces after generating a constrained Delaunay triangulation with Lloyd optimization. I am on VS2012 using CGAL 4.12 in release mode. I see that for a given case the finite_vertices list is repeatable (and so is the vertex list under finite_faces); however, the ordering of the edges in finite_edges seems to change from run to run.
for (auto eit = cdtp.finite_edges_begin(); eit != cdtp.finite_edges_end(); ++eit)
{
    const auto isConstrainedEdge = cdtp.is_constrained(*eit);
    // An edge is a (Face_handle, index) pair: its two endpoints are the
    // cw/ccw vertices of that index within the incident face.
    auto& cFace = *(eit->first);
    auto cwVert = cFace.vertex(cFace.cw(eit->second));
    auto ccwVert = cFace.vertex(cFace.ccw(eit->second));
}
I use the above code snippet to extract the vertex list, and the vertex list for a given edge changes from run to run.
Any help resolving this is appreciated, as I am looking for consistent behavior in the code. My triangulation involves many line constraints on a two-dimensional domain.
I was told the behaviour is probably dependable in practice, but there is no guarantee of order; IIRC the documentation says the traversal order is not guaranteed. I think it's best to assume the iterators' traversal order is not deterministic and could change.
You could use any of the _info extensions to embed information into the face, edge, etc (a hash perhaps?) which you could then check against to detect a change.
In my use case, I wanted to traverse the mesh in parallel, and OpenMP didn't support the iterators. So I hold a vector of the Face_handles in memory, which I can then easily index over (see the sketch below). In conjunction with the _info data, you could use this to build a vector of edges, faces, etc. with a guaranteed order, using unique information in the ->info() field.
Another _info example.
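To make the vector-of-handles idea concrete, here is a minimal self-contained sketch (the kernel and triangulation types are chosen for illustration):

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Constrained_Delaunay_triangulation_2<K>       CDT;

int main()
{
    CDT cdt;
    cdt.insert_constraint(CDT::Point(0, 0), CDT::Point(1, 0));
    cdt.insert_constraint(CDT::Point(1, 0), CDT::Point(0, 1));

    // Snapshot the face handles once; the vector's ordering is now fixed
    // and indexable, e.g. from an OpenMP for loop.
    std::vector<CDT::Face_handle> faces;
    for (auto fit = cdt.finite_faces_begin(); fit != cdt.finite_faces_end(); ++fit)
        faces.push_back(fit);
    return 0;
}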

Is there a way to create a Traits class to parametrise Envelope_diagram_2 where the X monotone curves can be segments, rays or conic curves?

I am using the Envelope_3 package of CGAL-4.9.1 and I need to compute an upper envelope where the resulting envelope diagram (Envelope_diagram_2<EnvTraits>) could have edges of three different types:
segments
rays
parabolic arcs (conic arcs)
The three models of EnvelopeTraits_3 provided by CGAL are not enough for this.
I therefore need to create my own traits class (which has to be a model of the concept EnvelopeTraits_3).
For now, I have made something like the already provided Env_sphere_traits_3<ConicTraits> model, which gives me both parabolic arcs and segments (I just use straight arcs).
The problem arises because I also need to be able to use rays. How could I do this? Is there a Traits class that I can extend (just like I'm doing right now with Arr_conic_traits_2) that provides X_monotone_curve_2s that can be of the three types that I need?
I found the Arr_polycurve_traits_2 class, hoping that it would allow curves of different types to be stored as subcurves, but it actually only allows storing polycurves whose pieces are all of the same kind (linear, Bezier, conic, circular...).
What you need is a model of the EnvelopeTraits_3 concept and of the ArrangementOpenBoundaryTraits_2 concept. Among all traits classes provided by the "2D Arrangements" package, only instances of the templates Arr_linear_traits_2, Arr_rational_function_traits_2, and Arr_algebraic_segment_traits_2 are models of the latter concept.
I suggest that you develop something like Env_your_object_traits_3<AlgebraicTraits_2>, where the template parameter AlgebraicTraits_2 can be substituted with an instance of Arr_algebraic_segment_traits_2.
Efi
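For what it's worth, the wiring Efi suggests would look roughly like the sketch below. Env_your_object_traits_3 does not exist in CGAL; it is the traits class you would still have to write yourself, so that part is left as a comment:

#include <CGAL/CORE_BigInt.h>
#include <CGAL/Arr_algebraic_segment_traits_2.h>
#include <CGAL/envelope_3.h>

typedef CGAL::Arr_algebraic_segment_traits_2<CORE::BigInt> Algebraic_traits_2;

// Hypothetical: your own model of EnvelopeTraits_3, built on top of the
// algebraic segment traits, would slot in like this:
//   typedef Env_your_object_traits_3<Algebraic_traits_2>   Env_traits_3;
//   typedef CGAL::Envelope_diagram_2<Env_traits_3>         Envelope_diagram_2;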

How to add a new syntax element in HM (HEVC test Model)

I've been working on the HM reference software for a while to improve something in the intra prediction part. A new intra prediction algorithm is now added to the code, and I let the encoder choose between my algorithm and the default algorithm of HM (according to the RD cost, of course).
What I need now is to signal a flag for each PU, so that the decoder can perform the same algorithm that the encoder chose in the rate-distortion loop.
I want to know exactly what I should do to properly add this one-bit flag to the stream without breaking anything in the code.
Assuming that I want to use a CABAC context model to keep track of my flag's statistics, these are the steps I take:
adding a new context model like ContextModel3DBuffer m_cCUIntraAlgorithmSCModel to the TEncSbac.h file.
properly initializing the model (both at the encoder and the decoder side) by looking at how HM initializes other context models.
calling m_pcBinIf->encodeBin(myFlag, m_cCUIntraAlgorithmSCModel) at the encoder side and m_pcTDecBinIf->decodeBin(myFlag, m_cCUIntraAlgorithmSCModel) at the decoder side.
I take these three steps but apparently it breaks something.
PS: Even equiprobable signaling (i.e. without using CABAC contexts) would be useful. I just want to send this flag peacefully!
Thanks in advance.
I could solve this problem finally. It was a bug in the CABAC context initialization.
But I want to share this experience as many people may want to do the same thing.
The three steps that I explained are essentially necessary for adding a new syntax element, but one must be very careful with the following:
In the beginning, you need to decide whether you want to use a separate context model for your syntax element or reuse an existing one. If you go for a separate context, you should define a ContextModel3DBuffer, and the best way to do that is to find a similar syntax element in the code and then duplicate its ContextModel3DBuffer definition and ALL of its occurrences in the code. This way you make sure that you are considering everything.
Encoding of each syntax element happens in two different places: first in the RDO loop, to make a "decision", and second in the actual encoding phase, when the decisions are written to the bitstream (e.g. in the encodeCtu function).
The order of encoding/decoding syntax elements must be the same at the encoder and decoder sides. For example, if your new syntax element is encoded after splitFlag and before predMode at the encoder side, you must decode it exactly between splitFlag and predMode at the decoder side.
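A self-contained toy (plain C++, not HM code) that illustrates why this ordering rule is absolute: the decoder has no way to know what a bit means except by its position in the parse order.

#include <cassert>
#include <vector>

// Toy bitstream: the decoder must read flags in exactly the order the
// encoder wrote them, or every later syntax element is misinterpreted.
struct Bitstream {
    std::vector<bool> bits;
    std::size_t pos = 0;
    void writeBin(bool b) { bits.push_back(b); }
    bool readBin() { return bits.at(pos++); }
};

int main()
{
    Bitstream bs;
    bs.writeBin(true);   // splitFlag
    bs.writeBin(false);  // the new flag, inserted between the two
    bs.writeBin(true);   // predMode

    assert(bs.readBin() == true);   // splitFlag
    assert(bs.readBin() == false);  // new flag: same position as at the encoder
    assert(bs.readBin() == true);   // predMode
    return 0;
}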
The context model is implemented as a 3D matrix in order to track the statistics of syntax elements separately for different block sizes, components, etc. This means that when you call the function encodeBin, you must make sure that the correct index is being used. I've made stupid mistakes in this part!
Apart from the above remarks, I found the function getState very useful for debugging. This function returns the state of your CABAC context model at an arbitrary place in the code where you have access to it. It is very useful for comparing the state at the same place in the encoder and the decoder when you have a mismatch. For example, it happens a lot that you encode a 1 but decode a 0. In this case, you need to check the state of your CABAC context before encoding and before decoding: they should be the same. If they are not, track the error back to find the first place of mismatch.
I hope it was helpful.