Minecraft: how to get the coordinates of blocks inside a selection
I was wondering if it is possible to generate a list of the coordinates of all blocks of a certain type within a selection (a selection made by setting pos1 and pos2 with WorldEdit). Thanks.
See BlockPos#getAllInBox. It has several overloads, including one that takes two BlockPos arguments (the corners of the box). You can then look up each position with World#getBlockState and compare the result against a reference state from Blocks.SOMETHING.getDefaultState(), or compare Block instances by calling BlockState#getBlock().
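A rough sketch of that loop (hedged: exact package paths and signatures vary between Minecraft versions and mappings, and in newer versions getAllInBox returns a Stream rather than an Iterable; world, pos1, and pos2 are placeholders for your World instance and the selection corners):

import java.util.ArrayList;
import java.util.List;
import net.minecraft.block.Blocks;       // package path varies by version
import net.minecraft.util.math.BlockPos;
import net.minecraft.world.World;

static List<BlockPos> blocksOfTypeInBox(World world, BlockPos pos1, BlockPos pos2) {
    List<BlockPos> matches = new ArrayList<>();
    for (BlockPos pos : BlockPos.getAllInBox(pos1, pos2)) {
        // Compare Block instances; alternatively compare the full state
        // against Blocks.DIAMOND_ORE.getDefaultState().
        if (world.getBlockState(pos).getBlock() == Blocks.DIAMOND_ORE) {
            matches.add(pos.toImmutable()); // iteration may reuse mutable positions
        }
    }
    return matches;
}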
I have a tile-based map where an agent needs to go from one tile to another. Some tiles have (occupied pos-X-Y), meaning the agent can't step on the tile named pos-X-Y. This part works, but I need to make these tiles occupied only on certain turns. I tried to use the action cost and add a number to each (occupied pos-X-Y), like this: (occupied pos-X-Y Z), planning to compare the number Z with the current action cost. But I couldn't even assign the number to the occupied tile.
How do I assign a number to these occupied tiles, and how do I compare it with the action cost?
Have you tried numeric fluents (functions)?
Your move action can increase a "turn" function.
A (forbidden_turn ?t - tile) function can be assigned an integer value, which you can then use in a precondition. But this requires your planner to support negative preconditions.
Otherwise, you can use an allowed-turn function instead.
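A minimal sketch of the numeric-fluents route (names like forbidden-turn and adjacent are illustrative, not from your domain; it needs a planner that supports :fluents and :negative-preconditions):

(define (domain grid-turns)
  (:requirements :typing :fluents :negative-preconditions)
  (:types tile)
  (:predicates (at ?t - tile) (adjacent ?t1 ?t2 - tile))
  (:functions (turn) (forbidden-turn ?t - tile))
  (:action move
    :parameters (?from ?to - tile)
    :precondition (and (at ?from)
                       (adjacent ?from ?to)
                       ; the destination must not be forbidden on this turn
                       (not (= (turn) (forbidden-turn ?to))))
    :effect (and (not (at ?from))
                 (at ?to)
                 (increase (turn) 1))))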
I figured it out (with help). Instead of using numbers, I created many objects, which I will call turns, and then asserted that turn 2 is always after turn 1, turn 3 is always after turn 2, and so on. I added these turns as the letter Z in (occupied pos-X-Y Z). When the actor moved, I just changed his turn to the next one based on the ordering I created earlier.
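For reference, a sketch of that object-based encoding (predicate names are illustrative):

(define (domain grid-turn-objects)
  (:requirements :typing :negative-preconditions)
  (:types tile turn)
  (:predicates (at ?t - tile)
               (adjacent ?t1 ?t2 - tile)
               (current-turn ?z - turn)
               (next ?z1 ?z2 - turn)          ; turn 2 follows turn 1, etc.
               (occupied ?t - tile ?z - turn))
  (:action move
    :parameters (?from ?to - tile ?now ?after - turn)
    :precondition (and (at ?from)
                       (adjacent ?from ?to)
                       (current-turn ?now)
                       (next ?now ?after)
                       (not (occupied ?to ?now)))
    :effect (and (not (at ?from))
                 (at ?to)
                 (not (current-turn ?now))
                 (current-turn ?after))))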
Using pyiron, I want to calculate the mean square displacement of the ions in my system. How do I see the total displacement (i.e. not folded back by periodic boundary conditions) without dumping very frequently and checking when an atom passes over the boundary and gets wrapped?
Try comparing job['output/generic/unwrapped_positions'][-1] and job.structure.positions + job.output.total_displacements[-1]. If they deliver the same values, it's definitely fine either way. If not, you can post the relevant lines of your notebook here.
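For instance (a sketch, assuming job is the finished job object):

import numpy as np

# Do the two routes agree for the last snapshot?
same = np.allclose(job['output/generic/unwrapped_positions'][-1],
                   job.structure.positions + job.output.total_displacements[-1])
print(same)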
I'd like to add a few comments to Jan's answer:
While job['output/generic/unwrapped_positions'] returns the unwrapped positions parsed from the output files, job.output.total_displacements returns the displacements of atoms calculated from each pair of consecutive snapshots. So if an atom moves more than half the box length in any direction between two snapshots, job.output.total_displacements will give wrong coordinates. Therefore, job['output/generic/unwrapped_positions'] is generally more trustworthy, but it is not available for all codes (some codes simply do not provide an output for unwrapped positions).
Moreover, if an interactive job is used, it is possible that job.structure.positions does not return the initial positions, i.e. job.structure.positions+job.output.total_displacements won't be initial positions + displacements.
So, in short, my answer to your question would rather be: "Use job['output/generic/unwrapped_positions'], and if it's not available, use job.structure.positions + job.output.total_displacements, but be aware of the potential problems you might be running into."
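Put into a sketch (assumes job is a finished pyiron job; whether a missing entry comes back as None may depend on the pyiron version, so adjust the fallback check accordingly):

import numpy as np

unwrapped = job['output/generic/unwrapped_positions']
if unwrapped is None:  # some codes don't write unwrapped positions
    unwrapped = job.structure.positions + job.output.total_displacements

# Mean square displacement per snapshot, averaged over all atoms
unwrapped = np.asarray(unwrapped)
disp = unwrapped - unwrapped[0]
msd = (disp ** 2).sum(axis=-1).mean(axis=-1)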
Definition:
As defined here, CGGetDisplaysWithPoint takes 4 parameters:
A CGPoint object
An int32 representing the maximum number of displays returned
A mutable array passed by reference, which will be filled with the displayIDs found.
An int32 representing the matching display count
Syntax:
CGError CGGetDisplaysWithPoint(CGPoint point, uint32_t maxDisplays, CGDirectDisplayID *displays, uint32_t *matchingDisplayCount);
This is fine and I can get this function working; however, I am quite confused as to how I should deal with the maxDisplays parameter.
As I understand it, if I set maxDisplays to 5 and someone has 6 displays, is there then a 1/6 chance that a randomly selected pixel will find no displays?
So do we just set maxDisplays to something unrealistic, like 99, and release the array afterwards? What's the point of this argument?
The point of the argument is to prevent the function from writing past the end of your array. You have to tell it the capacity of the array. Note that the displays parameter is neither a Cocoa nor Core Foundation mutable array object. It's a C-style array. It's "mutable" in the sense that it's not "const", but it's not an object that manages its own storage. You are responsible for managing that storage and must communicate its capacity to any function that is intended to store data in it (or otherwise guarantee that such function won't overrun it).
So, your question should really be how to decide on the capacity of the array. There are two basic approaches:
1) Call the function passing NULL for the displays parameter and any arbitrary value (0 is best) for maxDisplays. As documented, when displays is NULL, maxDisplays is ignored and the function reports via matchingDisplayCount the number of displays whose bounds contain the given point. Then allocate an array with (at least) that many elements, and call the function again, passing that array for displays and its capacity for maxDisplays (see the sketch after this list).
2) Use an array with capacity of 32. It's not explicitly documented but it's implicit in the API that that's the maximum number of supported displays. A display ID can be converted to an OpenGL display mask using CGDisplayIDToOpenGLDisplayMask(). The type CGOpenGLDisplayMask is used to hold OpenGL display masks. It is defined as uint32_t, a 32-bit value. Therefore, there can be at most 32 active displays.
This technique is used in some Apple docs, like here, here, here, and here. That last one even makes a direct connection between the number of bits in CGOpenGLDisplayMask and the maximum number of displays.
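A hedged sketch of both approaches (compile with -framework CoreGraphics; error handling kept minimal, and the test point is arbitrary):

#include <CoreGraphics/CoreGraphics.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    CGPoint point = CGPointMake(100.0, 100.0); /* arbitrary test point */

    /* Approach 1: two-call pattern. With displays == NULL, maxDisplays is
       ignored and count receives the number of matching displays. */
    uint32_t count = 0;
    if (CGGetDisplaysWithPoint(point, 0, NULL, &count) != kCGErrorSuccess)
        return 1;
    CGDirectDisplayID *displays = malloc(count * sizeof *displays);
    if (displays != NULL &&
        CGGetDisplaysWithPoint(point, count, displays, &count) == kCGErrorSuccess) {
        for (uint32_t i = 0; i < count; i++)
            printf("display ID: %u\n", displays[i]);
    }
    free(displays);

    /* Approach 2: fixed-capacity array of 32, the implicit maximum. */
    CGDirectDisplayID fixed[32];
    uint32_t matching = 0;
    if (CGGetDisplaysWithPoint(point, 32, fixed, &matching) == kCGErrorSuccess)
        printf("%u display(s) contain the point\n", matching);

    return 0;
}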
I am working on a VB.NET auto-focus routine and have the image-processing part worked out: basically I do some edge detection, convert to grayscale, and then measure the standard deviation to find the most 'in focus' point of the image.
I have done this with a number of images, and the result almost comes out as a normal distribution. Now I want to start integrating this with my microscope and a stepper motor.
The concept is that I would move the stepper motor between a lower and an upper limit, measuring the above through live view and recording the values in a list. In my case the two things I want to record are the position and the standard-deviation value (a Double).
I am wondering what the best way to record these is: should they be recorded in a typed list, in a dictionary, or by some other method?
Once I have recorded all of these values, I want to go through them to conduct some simple analysis, so how would I then determine the average, min, max, etc.?
My first attempt at storing the information was in a typed list, where I had essentially done the below:
Public ZPositions As New List(Of Zfocus)
Public Class Zfocus
Public Position As Integer
Public GreyStDev As Double
End Class
The second way was to use a dictionary:
Public ZPosition As New Dictionary(Of Integer, Double)
However, in both cases I am not sure how I can either pull out a single maximum position value (e.g. the Position integer) or, from the dictionary, the position key (Integer) which corresponds to the best auto-focus position.
The third, added bonus would be to be able to pull out any positions above a specific value, which may correspond to having some focus information within them, for focus stacking.
Many thanks
Big thanks to jmcilhinney, this solved my issue and works a treat!
I went with a strongly typed list (the ZFocus list), and then I could do the below:
MaxPosition = ZPositions.First(Function(zp1) zp1.GreyStDev = ZPositions.Max(Function(zp2) zp2.GreyStDev))
This allowed me to set up an auto-focus routine which loops through a number of images (as a test), stores the position (e.g. the image number in this case) and the edge-intensity information, and at the end pulls out the strongest intensity information, which forms the best auto-focus point in my case.
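For completeness, the simple analysis and the "bonus" threshold filter fall out of the same LINQ approach (a sketch; it needs Imports System.Linq, and the threshold value is illustrative):

' Simple statistics over the recorded values
Dim avg As Double = ZPositions.Average(Function(zp) zp.GreyStDev)
Dim mn As Double = ZPositions.Min(Function(zp) zp.GreyStDev)
Dim mx As Double = ZPositions.Max(Function(zp) zp.GreyStDev)

' Positions whose edge measure exceeds a cutoff, e.g. for focus stacking
Dim threshold As Double = 2.5 ' illustrative cutoff, tune for your optics
Dim stackCandidates As List(Of Integer) =
    ZPositions.Where(Function(zp) zp.GreyStDev > threshold).
               Select(Function(zp) zp.Position).
               ToList()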
If you were to write an API that is called from Lua (which is 1-based, e.g. table indices start at 1), would you apply the same rule to your API?
For example, say your API had a function called GetFoo(x, y) which returns a Foo at the coordinate (x, y). Would you start your coordinate axes at (0,0) or (1,1) for the API, assuming that in the system itself (say, written in C or C++, which are 0-based) these things start at (0,0)? (If you used the Lua convention, you would always have to subtract 1 when retrieving numbers for these kinds of operations from the Lua stack.)
I haven't used Lua, but I would say for a coordinate system specifically (0,0) would be preferred.
For everything else, as long as you state it clearly in the documentation, by all means start indices at 1.
You could also just use index 0 in your tables/arrays. The only inconvenience is that the standard libraries use the 1-based convention, so things like table.sort, string operations, etc. will ignore the table[0] element.
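A quick sketch of that caveat:

local t = {}
t[0] = "zero"  -- legal, but outside the 1-based sequence
t[1] = "one"
t[2] = "two"

print(#t)                                  -- prints 2: the length operator counts 1..n only
for i, v in ipairs(t) do print(i, v) end   -- iterates 1 and 2; t[0] is skipped
table.sort(t)                              -- sorts t[1..#t]; t[0] is untouched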