Using DigitalMicrograph calibrations in scripts - dm-script

I am trying to use rotations and calibrations between different microscope coordinate systems (e.g. beam tilt, stage shift, CCD image/diffraction pattern) in DigitalMicrograph by using the calibrations present in the "Microscope Data.gtg" file. To do this I load the file and pull out the different calibrations. Is there an easier way to access individual calibrations?
To determine the orientation of the stage, the script needs to know at what magnification the stage calibration was performed. In old versions of DigitalMicrograph there was a global tag called "Calibrations:Stage Calibration:Acquisition Magnification". However, I could not find this tag in GMS 2.1.

The code regarding calibrations has indeed changed between GMS 1 and GMS 2, in the way you've described.
There is no easy access to the required information via the scripting language.
However, the solution you have described is the best workaround.
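For reference, the tag-file workaround can be scripted along these lines. This is a minimal sketch, not a definitive recipe: the file location, the TagGroupLoadFromFile call, and the tag path are all assumptions that should be checked against your GMS version (e.g. by browsing the loaded TagGroup).

    // Sketch: load Microscope Data.gtg into a TagGroup and read a value.
    // The path, the function availability, and the tag path are assumptions;
    // verify them in your GMS installation.
    string path = "C:\\ProgramData\\Gatan\\Microscope Data.gtg"
    TagGroup tg = NewTagGroup()
    if ( !TagGroupLoadFromFile( tg, path ) )
        Throw( "Could not load " + path )
    number mag = 0
    // Example tag path only; browse the file to find the actual structure
    if ( TagGroupGetTagAsNumber( tg, "Calibrations:Stage Calibration:Acquisition Magnification", mag ) )
        Result( "Stage calibration magnification: " + mag + "\n" )
    else
        Result( "Tag not found - browse the TagGroup for the actual path\n" )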

How to add a point to an AutoCAD drawing using the command line

I work in the field of GIS, dealing with contours, point heights and other datasets that carry elevation-related information.
In GIS software (for example QGIS), I can extract the geometry attributes of a line, a polygon or a set of points. Consequently, I can also write the set of points and their geometric attributes to a text file through Python scripting.
There is a person who does not use QGIS and is unfamiliar with GIS-based techniques. Consequently, the files that I generate using QGIS are completely useless to him. Further, he works on a macOS computer, so the GIS-based AutoCAD is not available to him either.
Therefore, the question is: how can I provide a set of points with their coordinates, or a polygon made of points, to AutoCAD via the command line? For example, do we have a command or set of commands like
SET ORIGIN TO 50000,5000
ADD POINT 51000, 51000
...
In AutoCAD he can use the SCRIPT command.
You create a plain-text .scr file.
An example file with coordinates can be found here.
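For illustration, a script file is just the keystrokes you would type at the command line, one entry per line (a blank line acts as an extra Enter). A minimal sketch along the lines of the pseudo-commands above, to be tested against his AutoCAD version since the prompts vary; the UCS step moves the origin, and the blank line after it accepts the default axis prompt:

    UCS
    50000,50000,0

    POINT 51000,51000
    POINT 52000,52500

Since the points are already accessible from Python inside QGIS, the .scr can be generated directly. A minimal sketch with an example file name and point list:

    # Sketch: write (x, y) coordinates to an AutoCAD script file, which a
    # colleague can then run with the SCRIPT command.
    points = [(51000, 51000), (52000, 52500)]  # example data from QGIS
    with open("points.scr", "w") as f:
        for x, y in points:
            f.write(f"POINT {x},{y}\n")  # one POINT command per coordinate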

How do I configure geofencing?

Do you guys know how to configure geofencing in Toloka? I know that there are templates for such spatial tasks, but I need some tips on the configuration itself. Thanks!
Everyone who creates a task can flexibly customize the template, write their own js code, and set up the photo and coordinate verification process in their own way. You can use a typical template for field tasks as a basis.
You need to add the following parameters to file-img (https://yandex.ru/support/toloka-requester/concepts/t-components/upload-picture.html):
requiredCoordinates=true — coordinates are mandatory, i.e. the image data must contain coordinates.
compress=false — render the image without changes or compression (because your instructions require "Resolution of at least 6 megapixels (3000x2000 pix or similar)").
You can also add your own js code which will check, for example, the distance between the performer and your location.
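The distance check itself is just the haversine formula. Here is a sketch of the logic (shown in Python for brevity; port it to the template's js), with made-up coordinates and threshold:

    import math

    def distance_m(lat1, lon1, lat2, lon2):
        # Haversine great-circle distance in metres
        r = 6371000  # mean Earth radius, metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # Example: reject a submission taken more than 100 m from the target
    if distance_m(55.7510, 37.6180, 55.7520, 37.6190) > 100:
        print("Performer is too far from the required location")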

Is it possible to create a shapekey in Blender which has in-between targets, like in Maya?

I have been exploring Maya's blendshapes for the past few weeks, and they have one very interesting feature called in-between targets. It basically allows one blendshape to include intermediate states between the two basic targets (the modified and original objects). I created a couple and tried exporting to FBX to use them in Blender, but I get an error message. The error does not occur when I import an FBX file whose blendshapes have no in-between targets. Also, I wasn't able to find a pure Blender way to create shape keys with in-between targets, which got me wondering whether it is even possible.
Any help is appreciated.
Blender only supports one offset vector per vertex per shape key, so the in-between targets cannot be imported directly. I would suggest you report this as a bug; while I don't expect in-between shape keys to be added any time soon, the FBX importer should be fixed so that it doesn't break on these files.
One thing you could try is to see whether you can export the shape keys to an MDD or PC2 file. Blender has a Mesh Cache modifier that can be used with these files. From 2.78, a new option to try is exporting to an Alembic archive, as outlined here.
While Blender doesn't support in-between shape keys, you can create a comparable result using drivers: a single control can be made that enables a series of shape keys one after the other.
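A minimal sketch of that driver setup with bpy, assuming the active object already has two shape keys named "Mid" and "End" (hypothetical names). A custom property acts as the single control: "Mid" peaks at 0.5 and "End" takes over from 0.5 to 1.0, mimicking an in-between target:

    import bpy

    obj = bpy.context.object
    obj["blend"] = 0.0  # the single control, range 0..1

    def drive(key_name, expression):
        # Attach a scripted driver to a shape key's value
        fc = obj.data.shape_keys.key_blocks[key_name].driver_add("value")
        drv = fc.driver
        drv.type = 'SCRIPTED'
        var = drv.variables.new()
        var.name = "c"
        var.type = 'SINGLE_PROP'
        var.targets[0].id = obj
        var.targets[0].data_path = '["blend"]'
        drv.expression = expression

    drive("Mid", "max(0, 1 - abs(c * 2 - 1))")  # rises to 1 at c = 0.5, then falls
    drive("End", "max(0, c * 2 - 1)")           # stays 0 until c = 0.5, then rises to 1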

Share backgrounds between Cucumber files?

I have some Cucumber scenarios, for which I created the following files:
create_extended_search.feature
activate_extended_search.feature
edit_extended_search.feature
delete_extended_search.feature
Within these files, I have several scenarios.
Three of the files use the same background, and it would be nice to be able to place it into one file (e.g. support/backgrounds.rb) and then reference it from the feature files.
Is this possible somehow? Thanks.
I believe you would have to create a step that is made up of the steps in your current background, then call that step in the background of each feature.
There's no notion of 'including' feature files in Cucumber. As Justin points out, you can create a single step representing what you want as a background, and call that where appropriate. An alternative is to use a Before hook to perform certain tasks in advance of scenarios that you mark with a specific tag.
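Both approaches can be sketched like this, with hypothetical step, file, and tag names:

    # features/step_definitions/background_steps.rb
    # One step that bundles the shared background (the step texts are examples)
    Given /^the extended search setup$/ do
      steps %Q{
        Given I am logged in
        And an extended search exists
      }
    end

    # features/support/hooks.rb
    # Alternative: a Before hook that runs for scenarios tagged @extended_search
    Before('@extended_search') do
      # shared setup here
    end

Each feature file then keeps only a one-line background:

    Background:
      Given the extended search setup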
Personally, I'd treat this problem as something of a red flag, and start asking if my feature files were split up in the best way possible. Frequently if I find myself bemoaning the inability to include other feature files, or conversely, wishing I could exclude certain scenarios from running my background, it's a very strong sign that my feature files are too finely sliced up, or I'm trying to cram unrelated functionality together and need to split it up further.

Best approach to perform a CMMI Physical Configuration Audit?

I currently work for an organization that is moving into the whole CMMI world of documenting everything. I was assigned (along with one other individual) the title of Configuration Manager. Congratulations to me, right?
Part of the duties is to perform a physical configuration audit on a regular basis (they are still defining "regular basis"; it will be either quarterly or monthly). This is basically a check of the source code versions deployed in production against what we believe to be the source code versions in production.
Our project is a relatively small web application written in Java. The file types we work with are Java, JSP, XML, property files, and SQL packages.
The problem I have (and have raised, though it seems to be ignored) is: how am I supposed to physically log on to the production server and verify file versions? And even if I could, it would take a ridiculous amount of time.
The file versions are not even currently in the files (i.e. in a comment or something). It was also suggested that we place visible version numbers on each screen users see. I thought this ridiculous too, since the screens represent only a small fraction of the code we maintain.
The tools we currently use are NetBeans as our IDE and Serena Dimensions as our versioning tool.
I am specifically looking for ideas on how to perform this audit in a more automated way that is both accurate and not time-consuming.
My current idea is to add a comment at the top of each file containing that file's version number, plus a script that runs when a production build is created and generates an XML file (or similar) listing the name and version of each file in the build. Then, when I need to do an audit, I go to the production server, grab the XML file, compare it programmatically to what we believe to be in production, and output a report.
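That manifest step is straightforward to script. A sketch of the build-time half; the source directory, manifest name, and version-comment format (RCS-style $Revision$ keywords here) are assumptions:

    # Sketch: at build time, record each file's name and version header
    # into an XML manifest.
    import os, re
    import xml.etree.ElementTree as ET

    root = ET.Element("build-manifest")
    for dirpath, _, filenames in os.walk("src"):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as f:
                m = re.search(r"\$Revision:\s*([\d.]+)", f.read())
            entry = ET.SubElement(root, "file", name=path)
            entry.set("version", m.group(1) if m else "unknown")
    ET.ElementTree(root).write("manifest.xml")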
Any better ideas? I know this has to have been done before, and it seems crazy to me that I have not found any other resources.
You could compute a SHA1 hash of the source files on the production server, and compare that hash value to the versions stored in source control. If you can find the same hash in source control, then you know what version is in production. If you can't find the same hash in source control, then there are untracked modifications in production and your new job title is justified. :)
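A sketch of that comparison; the manifest format (one "<sha1> <path>" pair per line) is an assumption:

    # Sketch: hash each file on the production server and compare it with
    # the expected hash recorded at build time.
    import hashlib, os

    def sha1_of(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = {}  # path -> sha1
    with open("expected_hashes.txt") as f:  # "<sha1>  <path>" per line
        for line in f:
            digest, path = line.split(None, 1)
            expected[path.strip()] = digest

    for path, digest in expected.items():
        actual = sha1_of(path) if os.path.exists(path) else "missing"
        if actual != digest:
            print(f"MISMATCH {path}: expected {digest}, got {actual}")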
The typical trap organizations fall into with the CMMI is trying to overdo everything. If I could suggest anything, it'd be: start small and only do what you need. So consider any problems you may have had in the CM area previously.
The CMMI describes WHAT an organisation should do, but leaves the HOW up to you. Chapter 2 of the CMMI specification is well worth a read; it describes the required, expected, and informative components of the specification. Basically, the goals are required, the practices are expected, and everything else is informative. This means there is only a small part of the specification that a CMMI appraiser can directly demand: the goals. At the practice level, it is permissible to have either the practices as described or acceptable alternatives to them.
In the case of configuration audits, goal SG3 is "Integrity of baselines is established and maintained". SP3.2 says "Perform configuration audits to maintain integrity of the configuration baselines." There is nothing stated here about how often these are done, or how long they may take.
In my previous organisation, FCA/PCA was usually only done as part of the product release process. We used ClearCase as the versioning tool, with labels applied across the codebase to define baselines. We didn't have version numbers in all the source files, nor did we have version numbers on all the product's screens; the CM activity was doing the right thing and was backed up by audits, and this was never an issue in any CMMI appraisal.
We could use the deltas between labels to see which files had changed, and perform diffs to see the actual code changes. An important part of the process is being able to link those changes back to the requirement, bug report, or whatever other reason initiated the change.
Our auditing did use scripts to automate the process, but these were in-house developed scripts specific to ClearCase: basically, they would list all the files, their versions in the CM system, and the baseline/config item to which they belonged.
Can't you use your source control for this? If you tag your source control at each deployment, you can then verify what is in production against the source control system.