How can I use NumPy in IronPython for Revit API?

I'm writing a script for the Revit API using Python. I'd like to use NumPy, since I'm trying to generate a lattice grid of points at which to place families. However, I know NumPy is not compatible with IronPython, since it is built for CPython. Is there a solution for this? If not, is there a good way to generate a lattice grid of points without using external packages like NumPy?

pyRevit has a CPython engine available.
The post I linked was the beta announcement; the engine is now available in the pyRevit master release.
Some people have already successfully used pandas and NumPy with it.
pyRevit uses pythonnet.
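As for the second part of the question: a regular lattice of points needs no external packages at all; a nested iteration over the two axes is enough. A minimal pure-Python sketch (the counts and spacing below are made-up placeholders; in a Revit script you would wrap each pair in an XYZ point before placing a family):

```python
from itertools import product

def lattice_points(nx, ny, spacing, origin=(0.0, 0.0)):
    """Generate (x, y) lattice points row by row, with no NumPy needed."""
    x0, y0 = origin
    return [(x0 + i * spacing, y0 + j * spacing)
            for j, i in product(range(ny), range(nx))]

# Example: a 4 x 3 grid spaced 2.5 units apart.
points = lattice_points(nx=4, ny=3, spacing=2.5)
# points[0] == (0.0, 0.0), points[1] == (2.5, 0.0), 12 points in total
```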

Related

Is there a Pandas Profiling-like implementation built on Polars?

We use Pandas and Pandas Profiling extensively in our projects to generate profile reports. We are planning to explore Polars as a Pandas alternative and wanted to check whether there are any implementations like Pandas Profiling built on top of Polars.
I searched before posting this question and did not find any similar implementations, so I wanted to check whether anyone else knows of one.
I'm not aware of any project implemented natively with Polars. That said, there's an easy way to use Pandas Profiling with Polars.
From the Other DataFrame libraries page of the Pandas Profiling documentation:
If you have data in another framework of the Python Data ecosystem, you can use pandas-profiling by converting to a pandas DataFrame, as direct integrations are not yet supported.
On the above page, you'll see suggestions for using Pandas Profiling with other dataframe libraries, such as Modin, Vaex, PySpark, and Dask.
We can do the same thing easily with Polars, using the to_pandas method.
Adapting an example from the Quick Start Guide to use Polars:
import polars as pl
import numpy as np
from pandas_profiling import ProfileReport
# Note: newer Polars renamed the `columns` parameter to `schema`.
df = pl.DataFrame(np.random.rand(100, 5), columns=["a", "b", "c", "d", "e"])
profile = ProfileReport(df.to_pandas(), title="Pandas Profiling Report")
profile.to_file("your_report.html")
In general, you're always one method call away from plugging Polars into any framework that uses Pandas. I myself use to_pandas so that I can use Polars with my favorite graphing library, plotnine.
(As an aside, thank you for sharing the Pandas Profiling project here. I was quite impressed with the output generated, and will probably use it on projects going forward.)

How to create an _Arg, _Retval, _If operation in tensorflow?

I'm trying to test all the operations available in TensorFlow. For example, 'Conv2D' can be found in the tf.nn module.
There are some operations whose names start with an underscore, e.g. '_Arg', '_ArrayToList', '_Retval'. I looked into the TensorFlow source code but still can't work out how to create an '_Arg' operation. Please give me some pointers on how to find these operations, or on what they do.
Those operations are for internal use. They are implemented in C++, so you'll need to download the source code, write your own tests in C++, and compile and run them, since most of these operations do not have a Python wrapper.
Here you can find the C++ API.
This tutorial may help if you are getting started with TF operations. It does not do exactly what you want, as it works with custom public operations.
You may also look at the tests already implemented in the TF code, for example this test file.
However, I would strongly recommend reconsidering whether you really need to test those functions. Testing every single TensorFlow operation, even the internal ones, is going to be a hard job.

SIP-compatible Python plate solving package

I need to perform accurate pixel to world coordinate transformations on FITS files that were originally created using Maxim DL. Maxim uses Pinpoint for plate solving which generates TRi_j distortion coefficients. These are incompatible with the astropy.wcs coordinate transformation functions which I was proposing to use as these assume SIP distortion coefficients.
I'm therefore looking for options to re-platesolve the FITS files to generate SIP coefficients.
So far all I've found is astrometry.net, but this is an online service. I'm really looking for offline plate solving (preferably against a local copy of the GSC) that I can perform synchronously as part of my app's workflow.
Are there any Astropy-affiliated (or other) Python packages that perform SIP-compatible platesolving against the GSC?
Alternatively, are there any equivalents to wcs.all_pix2world that can use TRi_j distortion coefficients so I can work with the Maxim DL data?
Many thanks
Nigel
In addition to SIP coefficients, the astropy.wcs methods will work with TPV distortion coefficients. This means you can use the output of the SCAMP astrometric solver directly with astropy.wcs. If you wish to convert TPV coefficients to the SIP form, you can use the sip_tpv package for which I am the lead contributor. I don't know of a Python package wrapping SCAMP -- I have wrapped it for the Zwicky Transient Facility pipeline but that code is not public.
You could do:
import matplotlib.pyplot as plt
from astropy.io import fits
from astropy.wcs import WCS
hdu = fits.open(fitsfilename)[0]
wcs = WCS(hdu.header)
fig = plt.figure()
ax = fig.add_subplot(projection=wcs)  # WCSAxes, so the 'world' transform is available
ax.scatter([34], [3.2], transform=ax.get_transform('world'))
(Based on this Q.)

Customise Tensorboard with buttons, sliders, personal functions

Is it possible to customise Tensorboard with our own buttons, sliders, and colours, to create a sort of web application?
Thanks!
Yes, you are able to do this by creating a Tensorboard plugin. This blog post can give you a good idea of what capabilities you can add via a plugin. You can follow this tutorial to get started.
Broadly speaking, the 3 parts of a Tensorboard plugin are:
A summary op that gathers the data you need from the Tensorflow session.
A post-processing python script that serves that data to the web client.
Front-end code to display and interact with the data.
As it sounds like your interests mostly concern the presentation, you can likely use data already gathered by Tensorflow, and steps 1 and 2 may be very small or nonexistent in your case.
The documentation for Tensorboard plugins has moved to here.

What Tensorflow API to use for Seq2Seq

This year Google produced 5 different packages for seq2seq:
seq2seq (claimed to be general purpose, but inactive)
nmt (active, but probably just about NMT)
legacy_seq2seq (clearly legacy)
contrib/seq2seq (probably not complete)
tensor2tensor (similar purpose, also under active development)
Which package is actually worth using for an implementation? It seems they are all different approaches, but none of them is stable enough.
I too have had a headache over this issue: which framework to choose? I want to implement OCR using an encoder-decoder with attention. I tried to implement it using legacy_seq2seq (it was the main library at the time), but it was hard to understand the whole process, and it certainly should not be used any more.
https://github.com/google/seq2seq: to me it looks like an attempt at a command-line training script where you don't write your own code. If you want to train a translation model, this should work, but otherwise it may not (as with my OCR), because there is not enough documentation and too few users.
https://github.com/tensorflow/tensor2tensor: this is very similar to the above, but it is maintained and you can add more of your own code, e.g. for reading your own dataset. The basic use case is again translation, but it also enables tasks like image captioning, which is nice. So if you want a ready-to-use library and your problem is txt->txt or image->txt, you could try this. It should also work for OCR. I'm just not sure there is enough documentation for every case (like using a CNN as the feature extractor).
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/seq2seq: unlike the above, this is just a pure library, which is useful when you want to build a seq2seq model yourself in TF. It has functions to add attention, sequence loss, etc. In my case I chose this option, as it gives me much more freedom in choosing each part of the framework: the CNN architecture, the RNN cell type, bi- or uni-directional RNN, the type of decoder, and so on. But then you will need to spend some time getting familiar with the ideas behind it.
https://github.com/tensorflow/nmt: another translation framework, based on the tf.contrib.seq2seq library.
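The attention step that these libraries wrap can be understood independently of any of them. A minimal NumPy sketch of Luong-style dot-product attention (the shapes, values, and function name here are purely illustrative, not any library's API):

```python
import numpy as np

def dot_product_attention(query, keys, values):
    """Luong-style attention: score each encoder step, softmax, weighted sum."""
    scores = keys @ query                    # (T,) similarity of query to each key
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    context = weights @ values               # (d,) weighted sum of encoder values
    return context, weights

# Toy example: 5 encoder steps, 8-dimensional states.
T, d = 5, 8
rng = np.random.default_rng(0)
context, weights = dot_product_attention(rng.normal(size=d),
                                         rng.normal(size=(T, d)),
                                         rng.normal(size=(T, d)))
```

In a real decoder, the query is the current decoder state and the keys/values are the encoder outputs; the context vector is then concatenated with the decoder input at each step.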
From my perspective you have two options:
If you want to check an idea quickly and be sure you are using very efficient code, use the tensor2tensor library. It should help you get early results or even a very good final model.
If you want to do research, are not sure how exactly the pipeline should look, or want to learn the ideas behind seq2seq, use the library in tf.contrib.seq2seq.