How to create a tree visualization from a nested dictionary in Python 3 - numpy

I would like to make a graphical visualization of a nested dictionary as a simple tree structure. I have tried several different solutions, but they are either too old (Python 2.7) or I get weird error messages, even after reinstalling the required packages.
Here is an example of the nested dictionary. I can change the end-node values to whatever is most useful, and the dictionary should be able to scale to a much bigger file structure.
{
    "Folder1": {},
    "Folder2": {
        "Folder21": {},
        "Folder22": {}
    },
    "Document1": {},
    "Document2": {},
    "Folder3": {
        "Document31": {},
        "Folder32": {
            "Document 321": {},
            "Document 322": {}
        },
        "Folder33": {
            "Document331": {}
        },
        "Folder34": {
            "Document341": {}
        }
    }
}
I have tried solutions using Mapping, NetworkX, Graphviz, pandas, matplotlib 3.1.3, json, d3py 0.2.3, pyplot, numpy 1.18.1, and pydot (pydot2 1.0.33, pydotplus 2.0.2).
I am using pip3 18.1 to install the packages on Ubuntu 19.
The goal is to create something like this post shows, but it is 7 years old, and I can't get it to work after translating it from Python 2 to Python 3.
Python library for creating tree graphs out of nested Python objects (dicts)
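For reference, below is a minimal sketch of the kind of approach being attempted, using networkx and pydot (both already listed among the packages tried). It assumes Graphviz is installed on the system and is an illustration, not a verified answer from that post.
import networkx as nx
from networkx.drawing.nx_pydot import to_pydot

# A subset of the nested dictionary above
tree = {
    "Folder1": {},
    "Folder2": {"Folder21": {}, "Folder22": {}},
}

def add_edges(graph, parent, subtree):
    # Recursively add one edge per parent -> child relation
    for child, grandchildren in subtree.items():
        graph.add_edge(parent, child)
        add_edges(graph, child, grandchildren)

graph = nx.DiGraph()
add_edges(graph, "root", tree)

# Render to an image via pydot/Graphviz (node names must be unique)
to_pydot(graph).write_png("tree.png")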

Related

DeepPavlov error loading the model from Tensorflow (from_tf=True)

I'm trying to load the ruBERT model into DeepPavlov as follows:
# config_path is a dict
config_path = {
    "chainer": {
        "in": [
            "x"
        ],
        "in_y": [
            "y"
        ],
        "out": [
            "y_pred_labels",
            "y_pred_probas"
        ],
        "pipe": [
            ...
        ]
    }
}
model = build_model(config_path, download=False)
I have all the files of the original ruBERT model available locally, yet an error is thrown when building the model:
OSError: Error no file named pytorch_model.bin found in directory ruBERT_hFace2 but there is a file for TensorFlow weights. Use `from_tf=True` to load this model from those weights.
However, there is no clear explanation anywhere of how to pass this parameter through the build_model function.
How do I pass this parameter to build_model correctly?
UPDATE 1
At the moment, DeepPavlov 1.0.2 is installed.
The checkpoint of the model consists of the following files:
Currently there is no way to pass additional parameters via build_model. If you need an additional parameter, you should adjust the configuration file accordingly. Alternatively, you can change it via Python code.
from deeppavlov import build_model, configs, evaluate_model
from deeppavlov.core.commands.utils import parse_config
config = parse_config("config.json")
...
model = build_model(config, download=True, install=True)
But first, please make sure that you are using the latest version of DeepPavlov. In addition, please take a look at our recent article on Medium. If you need further assistance, please provide more details.
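For illustration, here is a rough sketch (not taken from the answer above or from the DeepPavlov docs) of the "change it via Python code" route: parse the config, set the flag on the relevant pipeline component, then build. The component and key names below are assumptions and depend on your particular config.
from deeppavlov import build_model
from deeppavlov.core.commands.utils import parse_config

config = parse_config("config.json")

# Hypothetical: mark the transformer-based component so that TensorFlow
# weights are converted on load. Which pipe entry needs the flag (and
# whether it accepts it) depends on your configuration file.
for component in config["chainer"]["pipe"]:
    if "pretrained_bert" in component:
        component["from_tf"] = True

model = build_model(config, download=False)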

How to split Prisma Model into separate file?

I'm learning Prisma ORM from video tutorials and the official docs. They explain and write all model code in one file called schema.prisma. That's fine, but as the application grows it becomes messy. So, how should I separate my model definitions into separate files?
At this point in time, Prisma doesn't support file segmentation. I can recommend 3 solutions, though.
Option 1: Prismix
Prismix utilizes models and enums to create relations across files for your Prisma schema via a prismix configuration file.
{
    "mixers": [
        {
            "input": [
                "base.prisma",
                "./modules/auth/auth.prisma",
                "./modules/posts/posts.prisma"
            ],
            "output": "prisma/schema.prisma"
        }
    ]
}
Place this inside a prismix.config.json file, which defines how you'd like to merge your Prisma schema segments.
Option 2: Schemix
Schemix utilizes TypeScript configuration to handle schema segmenting.
For example:
// _schema.ts
import { createSchema } from "schemix";

export const PrismaSchema = createSchema({
    datasource: {
        provider: "postgresql",
        url: {
            env: "DATABASE_URL"
        },
    },
    generator: {
        provider: "prisma-client-js",
    },
});

export const UserModel = PrismaSchema.createModel("User");

import "./models/User.model";

PrismaSchema.export("./", "schema");
Inside of User.model.ts:
// models/User.model.ts
import { UserModel, PostModel, PostTypeEnum } from "../_schema";

UserModel
    .string("id", { id: true, default: { uuid: true } })
    .int("registrantNumber", { default: { autoincrement: true } })
    .boolean("isBanned", { default: false })
    .relation("posts", PostModel, { list: true })
    .raw('@@map("service_user")');
This will then generate your prisma/schema.prisma containing your full schema. I used only one database as an example (taken from the documentation), but you should get the point.
Option 3: Cat -> Generate
Segment your schema into part files and run:
cat *.part.prisma > schema.prisma
yarn prisma generate
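If cat is not available (for example on Windows), the same concatenation can be scripted. Here is a minimal Python sketch, assuming the part files sit in the same directory as the target schema.prisma:
from pathlib import Path

# Mirror `cat *.part.prisma > schema.prisma`: concatenate every part file
# into one schema file before running prisma generate.
parts = sorted(Path(".").glob("*.part.prisma"))
Path("schema.prisma").write_text("\n".join(p.read_text() for p in parts))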
Most, if not all, of these are referenced in the currently open issue regarding support for Prisma schema file segmentation: https://github.com/prisma/prisma/issues/2377
This is not yet possible with Prisma. See this outstanding issue for possible workarounds https://github.com/prisma/prisma/issues/2377.
There is a library called Prismix that allows you to write as many schema files as you'd like; here is the link.

QJsonDocument fails with a confusing error

Consider the following piece of code:
from PyQt5.QtCore import QJsonDocument

json = {
    "catalog": [
        {
            "version": None,
        },
    ]
}
QJsonDocument(json)
Under Python 3.7 and PyQt 5.14.2, it results in the following error at the last line:
TypeError: a value has type 'list' but 'QJsonValue' is expected
QJsonDocument clearly supports lists: QJsonDocument({'a': []}) works fine.
So, what's going on?
As it turns out, the None value is the reason. Although the docs clearly show that QJsonDocument supports null values, None is not supported in PyQt5: QJsonDocument({'a': None}) results in
TypeError: a value has type 'NoneType' but 'QJsonValue' is expected.
The developers explained that this was an omission, and they have since resolved the issue.
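If you are stuck on an affected PyQt5 version, one possible workaround (a sketch, not part of the original answer) is to serialize with the standard json module and parse the bytes with QJsonDocument.fromJson(), which handles JSON null natively:
import json
from PyQt5.QtCore import QJsonDocument

data = {"catalog": [{"version": None}]}

# Round-trip through a JSON byte string so that None becomes a JSON null
doc = QJsonDocument.fromJson(json.dumps(data).encode("utf-8"))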

Create different versions from one bootstrap file with require.js

I'm developing an iPad/iPhone web app. Both versions share some of the resources. Now I want to build a bootstrap JS file that looks like this:
requirejs(['app'], function(app) {
    app.start();
});
The app resource should be ipadApp.js or iphoneApp.js. So I create the following build file for the optimizer:
{
    "appDir": "../develop",
    "baseUrl": "./javascripts",
    "dir": "../public",
    "modules": [
        {
            "name": "bootstrap",
            "out": "bootstrap-ipad.js",
            "override": {
                "paths": {
                    "app": "ipadApp"
                }
            }
        },
        {
            "name": "bootstrap",
            "out": "bootstrap-iphone.js",
            "override": {
                "paths": {
                    "app": "iphoneApp"
                }
            }
        }
    ]
}
But this doesn't seem to work. It works with just one module, but not with the same module with different outputs.
The only other solution that came to mind was four build files, which seems a bit odd. So is there a solution where I only need one build file?
AFAIK the r.js optimizer can only output a module with a given name once; in your case you are attempting to generate the module named bootstrap twice. The author of require.js, @jrburke, made the following comment on a related issue here:
...right now you would need to generate a separate build command for each script being targeted, since the name property would always be "almond.js" for each one.
He also suggests:
...if you wanted just one build file to run, you could create a node program and drive the optimizer multiple times in one script file. This example shows using requirejs as a module and calling requirejs.optimize().
I took a similar approach in one of my projects: I made my build.js file an ERB template and created a Thor task that ran through my modules and ran r.js once for each one. But @jrburke's solution using node.js is cleaner.

How can I optimize this custom dojo 1.7.2 build

I am working on my first project that uses a dojo 1.7.2 component, and I only need a vertical slider widget. I was able to create a custom build which is supposed to include only the modules needed for my stated dependencies. Using the following build profile and the command C:\dojo-release-1.7.2-src\util\buildscripts>build -p profiles/km.admin.dashboard.profile.js -r, the resulting release/dojo/dojo.js.uncompressed.js is 796 KB, and release/dojo/dojo.js is 236 KB. Is there any way to exclude more unneeded modules to reduce the file size? For instance, I just opened release/dojo/dojo.js.uncompressed.js and took a quick look; there is a dojo/json package, and I am not using any JSON. How do I exclude it? Thank you.
dependencies = {
    layers: [
        {
            name: 'dojo.js',
            customBase: true,
            dependencies: [
                'dojo/dojo',
                'dojo.aspect',
                'dojo/selector/acme',
                'dojo/cldr/nls/number',
                'dijit.form.VerticalSlider',
                'dijit.form.VerticalRule',
                'dijit.form.VerticalRuleLabels'
            ]
        }
    ],
    staticHasFeatures: {
        'dojo-trace-api': 0,
        'dojo-log-api': 0,
        'dojo-publish-privates': 0,
        'dojo-sync-loader': 0,
        'dojo-xhr-factory': 0,
        'dojo-test-sniff': 0
    },
    prefixes: [
        [ 'dijit', '../dijit' ],
        [ 'dojox', '../dojox' ]
    ]
}
There are some approaches by which you can trim dojo.js down to a bare minimum and then add back only the modules you really need.
See:
http://dojotoolkit.org/reference-guide/1.7/build/customBase.html
and also:
http://www.sitepen.com/blog/2008/07/01/dojo-in-6k/ (this is somewhat old, and the customBase approach in the first link might work better)