What exactly is `xla_client` in the jax library?

If you read the jax source code you'll hit something called xla_client, often imported like this:
from . import xla_client
This implies that xla_client is a Python module, but I can't find any file with that name, or any reference to a variable of that name.
I assume that it is related to https://pypi.org/project/jaxlib/, but this package just links back to the jax source code.
Can anybody clue me in?

The file you're referring to is stored at https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/xla/python
Let me expound further: xla_client is partly a wrapper around a compiled C++ extension called xla_extension.so. For example, see
from . import xla_extension as _xla
and the numerous references to _xla throughout xla_client. The source for this extension is https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/python/xla.cc, which we know because it says so quite clearly in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/python/BUILD:
pybind_extension(
    name = "xla_extension",
    srcs = [
        "xla.cc",
    ],
    ...
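If you want to confirm this on your own machine, here is a minimal sketch (not part of the original answer) that assumes a jaxlib wheel is installed; the exact module layout can vary between jaxlib versions:

# Locate the Python wrapper and the compiled extension it wraps.
from jaxlib import xla_client

print(xla_client.__file__)        # xla_client.py inside the installed jaxlib package
print(xla_client._xla.__file__)   # the compiled xla_extension shared library (_xla is the alias shown above)

The second path points at the xla_extension binary produced by the pybind_extension rule above.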

Related

How to convert multiple LCI ecospold files to a custom excel format / how to use parse_file from pyecospold / how to read ecospold into brightway

I have multiple ecospold (version 1) files with LCI data that I want to convert to a custom Excel format. I need all the data given in the ecospold files. For my own convenience I want to use Python to complete this task.
My research so far has led me to the following conclusions:
There are at least two converters (by GLAD and openLCA) for converting the ecospold formats (1 and 2) to e.g. the ILCD format. But those formats do not get me anywhere, since I need all the data accessible in Python in order to then write it into my custom Excel format.
To get the data in python, the package pyecospold (https://github.com/sami-m-g/pyecospold) seems to be a suitable choice.
According to the README that can be found at the pyecospold github repository,
ecoSpold = parse_file("data/v1/v1_1.xml") # Replace with your own XML file
should do the job. So I implemented the following lines:
import os
from pyecospold import parse_file, save_file, Defaults
from lxml import etree
cd = os.getcwd()
path_input = cd + r'\inputs\ecospold_test.xml'
# Parse the required XML file to EcoSpold class.
es = parse_file('inputs/ecospold_test.xml')
Now I run into the error:
TypeError: parse_file() missing 2 required positional arguments: 'schema_path' and 'ecospold_lookup'
I understood that a schema in XSD format is needed, so I got the schema files from the GitHub repository and amended my last line of code:
es = parse_file('inputs/ecospold_test.xml', 'inputs/schemas/v1/EcoSpold01Dataset.xsd')
Now there is still one argument missing:
TypeError: parse_file() missing 1 required positional argument: 'ecospold_lookup'
Since I have no experience parsing XML files in Python, I have no idea what to do with this. Additionally, I am confused as to why the README says nothing about these additional required arguments.
My second idea was to use brightway to get the data into Python. But since brightway itself is quite an extensive package, I could not find a simple (or any) way to do this. (Sadly, the notebooks linked in the answer to this question, Import Ecoinvent 2.2 Ecospold files into Brightway, no longer exist.)
Another option would of course be to write my own parser. But because I lack experience, and pyecospold does exactly this (at least in my understanding), I would like to avoid this option.
Additionally, in openLCA it is possible to read in ecospold files and then export them to an Excel format, from which I could of course build my custom Excel format. The problem here is that I have no idea how to automate this, because I do not want to read in and export each file individually and manually in openLCA.
If anyone has an idea on how to solve one of my subproblems or a good alternative on how to solve my general problem, I would be very thankful. :)
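As a rough illustration of the "write my own parser" fallback, here is a minimal sketch that uses only lxml (already imported in the snippet above) and makes no assumptions about pyecospold's API; the actual EcoSpold v1 tag names are namespaced and not spelled out here:

from lxml import etree

# Parse the same test file as above and flatten every element into
# (local tag name, attributes, text) tuples.
tree = etree.parse('inputs/ecospold_test.xml')
rows = []
for elem in tree.getroot().iter():
    if not isinstance(elem.tag, str):        # skip comments and processing instructions
        continue
    tag = etree.QName(elem).localname        # drop the XML namespace prefix
    rows.append((tag, dict(elem.attrib), (elem.text or '').strip()))

# rows could then be written to Excel, e.g. with openpyxl or pandas,
# and shaped into whatever custom format is needed.

This does not resolve the parse_file() error, but it gives full access to the file contents in Python without a dedicated ecospold library.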

Python.Net: how to execute modules in packages?

I'm not a Python programmer, so apologies if I don't get some of the terminology right (packages, modules)!
I have a folder structure that looks something like this:
C:\Test\System\
C:\Test\System\intercepts\
C:\Test\System\intercepts\utils1\
C:\Test\System\intercepts\utils2\
The last three folders each contain an empty __init__.py file, while the latter two folders (\utils1, \utils2) contain numerous .py modules. For the purposes of my question I'm trying to execute a function within a module called "general.py" that resides in the \utils1 folder.
The first folder (C:\Test\System) contains a file called "entry.py", which imports the .py modules from all those sub-folders:
from intercepts.utils1 import general
from intercepts.utils1 import foobar
from intercepts.utils2 import ...
..etc..
And here is the C# code that executes the above module then attempts to call a function called "startup" in a module called "general.py" in the \utils1 folder:
const string EntryModule = @"C:\Test\System\entry.py";
using (Py.GIL())
{
    using (var scope = Py.CreateScope())
    {
        var code = File.ReadAllText(EntryModule);
        var scriptCompiled = PythonEngine.Compile(code, EntryModule);
        scope.Execute(scriptCompiled);
        dynamic func = scope.Get("general.startup");
        func();
    }
}
However I get a PythonException on the scope.Execute(...) line, with the following message:
No module named 'intercepts'
File "C:\Test\System\entry.py", line 1, in <module>
from intercepts.utils1 import general
I'm able to do the equivalent of this using IronPython (Python 2.7), so I assume I'm doing something wrong with Python.Net (rather than changes to how packages work in Python 3.x).
I'm using the pythonnet 3.0.0 NuGet package by the way.
Edit
I've tried importing my "entry.py" module as follows:
dynamic os = Py.Import("os");
dynamic sys = Py.Import("sys");
sys.path.append(os.path.dirname(EntryModule));
Py.Import(Path.GetFileNameWithoutExtension(EntryModule));
It now appears to get a little further, however there's a new problem:
In the "entry.py" module you can see that it first imports a module called "general", then a module called "foobar". "foobar.py" contains the line import general.
When I run my C#, the stack trace is now as follows:
No module named 'general'
File "C:\Test\System\intercepts\utils1\foobar.py", line 1, in <module>
import general
File "C:\Test\System\entry.py", line 2, in <module>
from intercepts.utils1 import foobar
Why can't the second imported module ("foobar") "see" the module that was imported immediately before it ("general")? Am I even barking up the right tree by using Py.Import() to solve my original issue?
This turned out to be a change in how Python 3 handles imports compared to Python 2, and nothing to do with Python.Net.
In my "foobar.py" module I had to change import general to from . import general. The underlying issue is that Python 3 removed the implicit relative imports that Python 2 allowed, so a sibling module inside a package must be imported either with an explicit relative import or by its full package path.

Is there a way to import test files into redoc x-codeSamples source field?

The redoc guide specifies using raw text as source in a code sample:
https://github.com/Redocly/redoc/blob/master/docs/redoc-vendor-extensions.md#x-codeSamples
like so:
lang: JavaScript
source: console.log('Hello World');
However, I would like to keep my OpenAPI 3.0 YAML a living document, so I would prefer to import code directly from test files, e.g.:
lang: JavaScript
source: #/tests/js_api_test.js
where the contents of js_api_test.js is just:
console.log('Hello World');
This way the imported code would be guaranteed to work as long as the tests are passing, keeping the document living.
Given that I am already relying on generating lots of boilerplate from the YAML file, it seems ideal to keep all aspects of the file living.
Thanks in advance!
Found the answer:
label: 'Python'
source: {$ref: test.py}
will import the file at the relative path test.py at that point.

How do I find the version and authority of a Perl 6 module?

In Bar.pm, I declare a class with an authority (author) and a version:
class Bar:auth<Camelia>:ver<4.8.12> {
}
If I use it in a program, how do I see which version of a module I'm using, who wrote it, and how the module loader found it? As always, links to documentation are important.
This question was also asked on perl6-users but died before a satisfactory answer (or links to docs) appeared.
Another wrinkle in this problem is that many people aren't adding that information to their class or module definitions. It shows up in the META.json file but not the code.
(Probably not a satisfying answer, because the facts of the matter are not very satisfying, especially regarding the state of the documentation, but here goes...)
If the module or class was versioned directly in the source code à la class Bar:auth<Camelia>:ver<4.8.12>, then any code that imports it can introspect it:
use Bar;
say Bar.^ver; # v4.8.12
say Bar.^auth; # Camelia
# ...which is short for:
say Bar.HOW.ver(Bar); # v4.8.12
say Bar.HOW.auth(Bar); # Camelia
The ver and auth methods are provided by:
Metamodel::ClassHOW (although that documentation page doesn't mention them yet)
Metamodel::ModuleHOW (although that documentation page doesn't exist at all yet)
Unfortunately, I don't think the meta-object currently provides a way to get at the source path of the module/class.
By manually going through the steps that use and require take to load compilation units, you can at least get at the prefix path (i.e. which location from $PERL6LIB or use lib or -I etc. it was loaded from):
my $comp-spec = CompUnit::DependencySpecification.new: short-name => 'Bar';
my $comp-unit = $*REPO.resolve: $comp-spec;
my $comp-repo = $comp-unit.repo;
say $comp-repo.path-spec; # file#/home/smls/dev/lib
say $comp-repo.prefix; # "/home/smls/dev/lib".IO
$comp-unit is an object of type CompUnit.
$comp-repo is a CompUnit::Repository::FileSystem.
Neither documentation page exists yet, and $*REPO is only briefly mentioned in the list of dynamic variables.
If the module is part of a properly set-up distribution, you can get at the meta-info defined in its META6.json (as posted by Lloyd Fournier in the mailing list thread you mentioned):
if try $comp-unit.distribution.meta -> %dist-meta {
    say %dist-meta<ver>;
    say %dist-meta<auth>;
    say %dist-meta<license>;
}

Compiling and linking to used modules in an external directory with the Compaq Fortran command prompt

I've already asked a similar question, here:
Linking to modules in external directory Compaq Visual Fortran command prompt
And I thought that the first answer was correct (that is, the manual says you can simply specify the path name before the module), but after deleting the temporary files in my library folder this approach stopped working. Trying the /include[:path] approach, here is my .bat file:
df /include:..\FORTRAN_LIB\ __constants
myIO griddata_mod myfdgen myDiff magneticField /exe:magneticField
And an error is returned saying:
__constants
myIO
griddata_mod
myfdgen
myDiff
magneticField
f90: Severe: No such file or directory
... file is '__constants'
Again, I apologize that this question is very specific, but it seems like it should be simple, yet it does not work at all.
p.s. Originally, I was using:
df ..\FORTRAN_LIB\__constants ..\FORTRAN_LIB\myIO
..\FORTRAN_LIB\griddata_mod ..\FORTRAN_LIB\myfdgen
..\FORTRAN_LIB\myDiff magneticField /exe:magneticField
But, as I've said, it stopped working after I deleted the temporary files in my FORTRAN_LIB folder. Also note that these .bat files use only one line; I've broken them into several lines here just for readability. I would prefer using the /include[:path] option, since that seems like the better solution.
Okay, so I think I've at least figured out a workaround. I understood that /include[:dir] tells the compiler to search "dir" for included files. From the documentation it seemed that this also applies to USEd modules, but that doesn't appear to be the case.
My program now looks like this:
include '..\FORTRAN_LIB\__constants.f90'
include '..\FORTRAN_LIB\computeError.f90'
include '..\FORTRAN_LIB\griddata_mod.f90'
include '..\FORTRAN_LIB\myfdgen.f90'
include '..\FORTRAN_LIB\myDiff.f90'
include '..\FORTRAN_LIB\myIO.f90'
program magneticField
    use constants
    use computeError_mod
    use griddata_mod
    use myfdgen_mod
    use myDiff_mod
    use myIO_mod
    implicit none
    ...
And my DF command like this:
df magneticField /exe:magneticField
And everything seems to work fine. It would be nicer to have the /include[:dir] option, but as long as I'm able to reach into a separate directory, I'm satisfied. If anyone finds a better solution I'll switch the checkmark. I hope this helps anyone else who was confused like me.