How to add a point to an AutoCAD drawing using the command line - vba

I work in the field of GIS and I am working with contours, point heights and other datasets which have elevation related information.
In a GIS software (for example QGIS), I can extract the geometry attributes of a line, polygon or a set of points. Consequently, I can also write the set of points and their geometrical attributes to a text file through Python scripting.
There is a colleague who does not use QGIS and is unfamiliar with GIS-based techniques. Consequently, the files that I generate using QGIS are of no use to him. Further, he works on a macOS computer, so the GIS-based AutoCAD is not available to him either.
Therefore, the question is: how can I provide a set of points with their coordinates, or a polygon defined by points, to AutoCAD via the command line? For example, is there a command or set of commands like
SET ORIGIN TO 50000,5000
ADD POINT 51000, 51000
...

In AutoCAD he can use the SCRIPT command.
You create a plain-text .scr file containing the commands to run.
An example file with coordinates can be found here.
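Since the asker already generates point lists with Python in QGIS, the .scr file can be written directly from such a script. A minimal sketch; the coordinates are made-up example values, and the POINT command syntax should be checked against the AutoCAD version in use:

```python
# Write a minimal AutoCAD script (points.scr) that adds one POINT per
# coordinate pair. Run it in AutoCAD via the SCRIPT command.
points = [(51000, 51000), (51500, 50250), (52000, 49800)]  # example data

script_text = "\n".join("POINT {},{}".format(x, y) for x, y in points) + "\n"

with open("points.scr", "w") as f:
    f.write(script_text)

print(script_text)
```

Each line becomes one command exactly as if it were typed at the AutoCAD command line, so the same approach extends to LINE, PLINE, etc.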


How do I configure geofencing?

do you guys know how to configure geofencing in Toloka? I know that there are templates for such spatial tasks but I need some tips on the configuration itself. Thanks!
Everyone who creates a task can flexibly customize the template, write their own js code, and set up the photo and coordinate verification process in their own way. You can use a typical template for field tasks as a basis.
You need to add the following parameters to the file-img component (https://yandex.ru/support/toloka-requester/concepts/t-components/upload-picture.html):
requiredCoordinates=true — the image data must contain coordinates (coordinates are mandatory).
compress=false — render the image without changes or compression (because your instructions require "Resolution of at least 6 megapixels (3000x2000 px or similar)").
You can also add your own js code which will check, for example, the distance between the performer and your location.
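The distance check mentioned above would live in the template's JS, but the underlying logic is simple. A sketch in Python of the great-circle (haversine) computation such a check would perform; the coordinates and the 100 m threshold are made-up example values:

```python
# Great-circle (haversine) distance between the performer's reported
# position and the task location.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance in metres between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Accept the photo only if it was taken within 100 m of the task location;
# the two sample points below are roughly 25 m apart.
d = haversine_m(55.7558, 37.6173, 55.7560, 37.6175)
print(d < 100)
```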

How to create business ready reports from jupyter notebooks? [closed]

I have spent quite some time trying to find a reasonable answer to this myself, but I ran into a dead end and hope you can help me.
Issue:
For the purpose of business reporting, I have created some Jupyter notebooks which include multiple pandas tables and seaborn/matplotlib plots as code cell output, with occasional markdown cells in between to provide explanations. Now I want these reports to be in a business-ready format to share with stakeholders. By business-ready I mean the following requirements:
The report does not include code
Output file format: PDF
The report includes a title page with title, additional information (e.g. date of analysis) and a table of contents
Tables are in an appealing visual format that makes the information easy to take in
The report is well structured
... and I am not able to get all these requirements together.
So far, I prefer to work with VS Code and use the browser-based Jupyter notebook if necessary (which unfortunately lacks some functionality).
What I have tried:
(1) This one was a no-brainer: I just add --no-input to the nbconvert command in the Anaconda shell and, whatever I do regarding the next points, it excludes the code.
(2) There are two ways I could find so far, which influence all subsequent steps/requirements
Way 1 ("html detour"): I convert the .ipynb to html and print it as PDF (this is a 2-step process, thus I see it as a detour)
Way 2 ("latex conversion"): I convert it to a PDF via nbconvert --to pdf and it uses latex in the background to create a pdf
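Both ways can also be driven from Python instead of the shell via nbconvert's API. A minimal sketch, assuming nbconvert and nbformat are installed; it uses the HTML exporter so it runs without a LaTeX toolchain, and the notebook content is a made-up in-memory example so the snippet is self-contained:

```python
# Drive nbconvert from Python rather than the shell. exclude_input=True
# has the same effect as the --no-input flag.
import nbformat
from nbconvert import HTMLExporter

# Build a tiny notebook in memory purely for illustration.
nb = nbformat.v4.new_notebook()
nb.cells = [
    nbformat.v4.new_markdown_cell("# Quarterly report"),
    nbformat.v4.new_code_cell("print(1 + 1)"),
]

exporter = HTMLExporter(exclude_input=True)  # drop code cells from the output
html, _resources = exporter.from_notebook_node(nb)

print("Quarterly report" in html)  # the markdown title survives
print("print(1 + 1)" in html)      # the code input is excluded
```

Swapping `HTMLExporter` for `PDFExporter` gives way 2, at which point a LaTeX installation is required.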
(3) ...and here start the issues:
html detour: I can get a toc via the nbextensions extension for Jupyter notebooks, and with it I can either use the H1 header level as the title, or include an extra markdown cell and increase the font size with an html command so that it looks appealing. Additional information is added manually in extra code cells. However, the toc only works in the browser version of Jupyter, which results in writing the analysis in VS Code, going to the browser to add the toc, converting it in the shell, opening the html, and printing it as pdf...
latex conversion: I can set up a latex template, included in the nbconvert command, that contains a toc by design. However, it either picks up the filename as the title automatically, or a title I can set in the notebook's metadata, which I can only edit from the browser. Further, the date of conversion is added below the title automatically as well, which might not be the date of the analysis if I have to reconvert it because someone wants a minor change. Thus, I cannot turn the auto title and date off (at least I couldn't find an option so far), and I have multiple steps as well.
(4) This one makes eventually the difference in the usability of the report
html detour: The format in the html file itself is the quite appealing format you usually get from tables using the display() command in Jupyter (which is used anyway if you just call a variable in Jupyter without print()) or if you build a table in a markdown cell. The table has a bold header and every other row has a grey background. Using the pandas .style method, I can format the table in the html file very nicely, with red font color for negative values only, or percentage bars as cell backgrounds. However, I lose all these formats when I print the PDF. Then it's just a bold header, a bold line splitting header and body, and the rows. Further, all cell output tables are left-aligned in the html (and I refer to the table itself, not its content) while the markdown tables are centered, which looks strange or, rather (and this is the issue), unprofessional. The benefit, however, is that these tables are somewhat auto-adjusted to a letter-size format within a certain range if the table would be wider than a letter page.
latex conversion: By design, the tables are not converted. I have to use pd.set_option("display.latex.repr", True) to convert all subsequent pandas table output, or add .to_latex() to every single pandas table. This has several downsides. Using this, all tables are displayed as the code that would be required to build a table in latex, and while doing the analysis this is often harder to interpret... especially if you want to find errors. Adding it only when the analysis is done creates unnecessary iterations. Further, I want to use the last report as a template for the next one, and I would have to delete the command, do my work, and add it again. Wider tables that don't fit the letter size are just cut off, regardless of how much wider they are than the page, and I would have to check every table (the last report had 20+) to see whether everything is included. ...and headers become longer if they include explanatory information. And finally, the latex table format does look professional, but scientifically professional rather than business professional, which in my experience can discourage one or another reader.
(5) So, since everything is made from cells and converted automatically, you get some strange output, with headers at the end of one page and the text, tables, and plots on the next, or pages with just a plot, and so on...
html detour: It's hard to describe the general issues I have. If you have ever printed a website, you have probably gotten some weird bulk of text that looks unstructured, with occasional half-white pages where they should not be. That's what you get when printing the html file of a Jupyter notebook. It would help if I could include a forced pagebreak, and you can find several versions of adding pagebreaks in the cell or in cell metadata, but they do not work since the html is created with a high-level setting prohibiting a pagebreak. Thus, I could only go into the html code and add page breaks manually. Manual effort I would like to avoid.
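One low-tech way around that manual step, sketched under the assumption that you post-process the exported html yourself: leave an agreed marker in a markdown cell and replace it with a CSS forced page break after conversion. The marker string is an arbitrary convention of this sketch, not an nbconvert feature:

```python
# Post-process the exported html: replace a marker left in a markdown cell
# with a CSS rule that forces a page break when the browser prints the page.
PAGEBREAK = '<div style="page-break-after: always;"></div>'

# Stand-in for the html produced by nbconvert; "PAGEBREAK_HERE" is the
# assumed marker convention.
html = "<h1>Part 1</h1><p>PAGEBREAK_HERE</p><h1>Part 2</h1>"
html = html.replace("<p>PAGEBREAK_HERE</p>", PAGEBREAK)

print(PAGEBREAK in html)
```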
latex conversion: Well, \pagebreak works.
So, due to the issues above, I currently tend towards the html detour, but it does not produce an appealing report. I have tried several latex templates but was usually dissatisfied with the output, since the .to_latex command makes it tedious and the report eventually looks like a scientific paper and not like a business report. The thing is, while this looks like a high standard, all these requirements are fulfilled by R Markdown notebooks basically out of the box, with slight additions to the YAML header at the top of the file. But I cannot use them for the report I want to create.
So, after this long intro (and I thank everybody for taking the time to read it), my question is how do I get appealing reports from a jupyter notebook?
Thanks!!!!!
Honestly, I'm in the same boat as you. It seems quite challenging to generate publication-ready PDF Reports natively from JupyterLab / Jupyter using nbconvert and friends.
Solution (that I'm using): What I can recommend is a different tool that will help you make amazing PDF reports: RStudio's R Markdown (completely free) together with the new ability to use Python from RStudio. I'm going to be teaching this in my R/Python Teams Course (the course waitlist is up).
Report Example
Here's how I'm doing it in my course:
Step 1 - Install RStudio IDE 1.4+ & R 4.0+
Head over to RStudio and install their IDE. You'll also need to install R.
Step 2 - Create a Project
Step 3 - Set Python Environment of your Project
Go to Tools > Project Options. Select the Python Interpreter.
Step 4 - Begin Coding Markdown and Python
Use "Python Code Chunks".
Step 5 - Knit to PDF
Note that this requires some form of LaTeX. You can install one easily with the tinytex package.
Step 6 - Check out your PDF Report
Looks pretty slick.
Try it out and see if it works for you.
I'd go like this from the terminal (this converts to Word, but PDF is also available; just change the final output to .pdf):
jupyter nbconvert --to html notebook.ipynb --TemplateExporter.exclude_input=True && pandoc notebook.html -s -o results.docx --resource-path=img --toc
Apart from installation and other pieces, there are several aspects that make using nbconvert for file conversion quite a tedious task.
Has anyone tried the Jupyter executable notebook or R Markdown methods? They are useful, but there is an extra cost in time and effort that makes them less feasible.
What I found to be very useful is that there are many websites serving this purpose: quick, easy, and hassle-free.
I use this IPYNB TO PDF converter; there are others as well.

Read Sentinel-2 L1C view angles from rasterio

I am trying to read the view angles from a Sentinel-2 image (L1C SAFE compact format) for executing an atmospheric correction algorithm. I can get those values by parsing the file MTD_TL.xml, but I am not able to get them through rasterio.
I have tried to access those data using the xml:SENTINEL2 and the xml:VRT metadata domains, but I can only access the values from the file MTD_MSIL1C.xml (the main metadata file).
The whole point of using rasterio is being able to use GDAL's virtual file system, as the images will be read from S3 buckets. Any alternatives for easily reading MTD_TL.xml through the virtual file system would also be valid (and really appreciated).
Thank you!!
Answering my own question:
I could not find how to get the values I require, but according to https://gdal.org/user/virtual_file_systems.html the function VSIFOpenL may be used to open the file. After that, manual parsing will do the trick :)
PS: I must read the documentation more carefully.
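The workaround above can be sketched as follows. The /vsis3/ path is a placeholder, the XML snippet only loosely mimics the Mean_Viewing_Incidence_Angle entries in MTD_TL.xml, and the VSI calls assume GDAL's Python bindings are available:

```python
# Read MTD_TL.xml through GDAL's virtual file system, then parse the
# angles with the standard library.
import xml.etree.ElementTree as ET

def read_vsi_file(path):
    """Read a whole file through GDAL's VSI layer (e.g. a /vsis3/ path)."""
    from osgeo import gdal  # requires GDAL's Python bindings
    f = gdal.VSIFOpenL(path, "rb")
    try:
        stat = gdal.VSIStatL(path)
        return gdal.VSIFReadL(1, stat.size, f)
    finally:
        gdal.VSIFCloseL(f)

# In real use: xml_bytes = read_vsi_file("/vsis3/bucket/.../MTD_TL.xml").
# An inline stand-in keeps this sketch self-contained:
xml_bytes = b"""<Tile_Angles>
  <Mean_Viewing_Incidence_Angle bandId="0">
    <ZENITH_ANGLE unit="deg">8.6</ZENITH_ANGLE>
    <AZIMUTH_ANGLE unit="deg">104.7</AZIMUTH_ANGLE>
  </Mean_Viewing_Incidence_Angle>
</Tile_Angles>"""

root = ET.fromstring(xml_bytes)
zenith = float(root.find(".//ZENITH_ANGLE").text)
print(zenith)
```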

Is it possible to create a shapekey in Blender which has in-between targets, like in Maya?

I have been exploring Maya's blendshapes for the past weeks, and it has one very interesting feature called in-between targets. It basically allows one blendshape to include intermediate states between the two basic targets (the modified and original objects). I created a couple and tried to export them in FBX to use in Blender, and I get an error message. This error does not occur when I import an FBX file without in-between targets in the blendshapes. Also, I wasn't able to find a pure Blender solution for creating shapekeys with in-between targets, which got me wondering if it is even possible.
Any help is appreciated.
Blender only supports one vector per vertex per shapekey, so the in-between targets cannot be imported directly. I would suggest you report this as a bug: while I don't expect in-between shapekeys to be added any time soon, the FBX importer should be fixed so that it does not break on these files.
One thing you could try is exporting the shapekeys to an MDD or PC2 file. Blender has a Mesh Cache modifier that can read these files. From 2.78, a new option is exporting to an Alembic archive, as outlined here.
While Blender doesn't support in-between shapekeys, you can create a comparable result using drivers: a single control can be set up to enable a series of shapekeys one after the other.
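The driver approach can be illustrated by the mapping each driver expression would implement: one control value in [0, 1] fades a chain of shapekeys in and out in turn. Pure Python for illustration; in Blender this logic would live in the scripted expression driving each shapekey's value:

```python
# Map one control value in [0, 1] to weights for a chain of shapekeys,
# so each "in-between" target peaks in turn as the control increases.
def inbetween_weights(control, n_targets):
    """Weight for each of n_targets shapekeys given one control in [0, 1]."""
    weights = []
    for i in range(n_targets):
        center = (i + 1) / n_targets   # control value where this target peaks
        width = 1.0 / n_targets        # distance to the neighbouring targets
        w = max(0.0, 1.0 - abs(control - center) / width)
        weights.append(w)
    return weights

print(inbetween_weights(0.5, 2))  # halfway: first target fully on -> [1.0, 0.0]
```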

Using DigitalMicrograph calibrations in scripts

I am trying to use rotations and calibrations between different microscope coordinate systems (e.g. beam tilt, stage shift, CCD image/diffraction pattern) in DigitalMicrograph by using the calibrations present in the "Microscope Data.gtg" file. To do this I load the file and pull out the different calibrations. Is there an easier way to access individual calibrations?
To determine the orientation of the stage, the script needs to know at which magnification the stage calibration was performed. In old versions of DigitalMicrograph there was a global tag called "Calibrations:Stage Calibration:Acquisition Magnification"; however, I could not find this tag in GMS 2.1.
There have been changes in the code regarding calibrations between GMS 1 and GMS 2 which indeed are as you've described.
There is no easy access to the required information via the scripting language.
However, the solution you have described is indeed the best workaround.