CLI to YANG XML from local input - netconf

I have a huge CLI configuration file that I'd like to convert to YANG XML:
....
system
 host-name hostname.com
 system-ip 8.8.8.8
 site-id 1
 organization-name "Organization name"
 sp-organization-name "Organization name"
 vbond vbond.net
....
!
There are utilities like netconf-console2 that will fetch the configs from a router and display them as YANG XML:
netconf-console2 --host=192.168.0.2 -u admin -p PASSWORD --port 830 --get-config
I was wondering if I could use the same utility to convert the conf from a local input, something like:
netconf-console2 --file localConf.cf
I'm not sure if that is even possible, or if there is an alternative to netconf-console2 for this.

There's no "YANG XML". There are YANG models (a lot of them), and you can think of sets of these YANG models as database schemas. Then there is some configuration data which is structured according to a set of some YANG models. This data can be serialized into JSON, or XML (and in future, to other formats as well). However, before you can do that, you have to know about that set of YANG models which the data corresponds to. What YANG models are you trying to use?

Related

Forge CSV Data Adapter - Reference Application Data Visualization

I hope you are all doing well. I've recently been trying to modify the Autodesk reference application to let me create a heat map with my own sensor data and my own model. However, I have been having difficulties. Has anyone done something similar before? Do you have to create two separate .env files, or do you just change both the FORGE_ID credentials and the CSV settings in the same one?
Thank you. (I attached an example of what it looks like with only the CSV portion changed.)
[Screenshot: changed CSV portion]
The .env file must be unique, as it is loaded by the dotenv package to expose all these values as environment variables (Node.js accesses them with process.env.[VARIABLE_NAME]).
To ensure your .env file is used by dotenv, you must set your ENV to local; if you look at /server/router/index.js, you will find these lines:
if (process.env.ENV == "local") {
  // Load the .env file next to the server sources and merge its
  // key/value pairs into process.env.
  require("dotenv").config({
    path: __dirname + "/../.env",
  });
}
As you are on macOS, you could try the command export ENV=local && npm run dev to start your server in dev mode.
The CSV portion you showed should work; just put all those lines into your current .env file and you should be able to use your own model AND add your own sensor data.
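As an illustration only, the merged file would look something like the sketch below. FORGE_CLIENT_ID and FORGE_CLIENT_SECRET are the usual Forge credential variables; the CSV entries are placeholders for whatever variable names your version of the reference application expects:
ENV=local
# Forge credentials for the FORGE_ID portion (use your own app's values)
FORGE_CLIENT_ID=your_client_id
FORGE_CLIENT_SECRET=your_client_secret
# ...followed by the CSV-related lines from your screenshot, unchanged
# (their exact names depend on the reference application version)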

Okapi Java properties file and XLIFF file

Is it possible to use Okapi to convert Java properties files to XLIFF, and to reconstruct the properties files from the XLIFF files?
Yes, this is possible using the Properties Filter.
An example of doing this using Okapi Tikal would look like this:
tikal.sh -fc okf_properties -x sample.properties -nocopy
# translate the resulting sample.properties.xlf file
tikal.sh -fc okf_properties -m sample.properties.xlf
You can also use this with Rainbow as part of an extraction pipeline.
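For illustration, assuming a trivial sample.properties like the one below, the first tikal.sh command writes a sample.properties.xlf next to it, with each property value exposed as translatable text (the exact XLIFF markup Okapi emits may differ from this sketch, so inspect the generated file):
# sample.properties
greeting=Hello World
farewell=Goodbye
After the targets in sample.properties.xlf have been translated, the -m step merges the translations back into a properties file with the same keys.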

Finite state machine visualizer

I need an application that prints/visualizes input/output pairs during FST runs. I mean, for each state of the FST, it needs to print a tuple that contains the input for that state and the output of that state. Right now I can generate FST files that are compatible with the foma, HFST, and xfst tools, so I guess the visualization tool only needs to be compatible with any one of them. Does anyone know of such a tool?
foma can produce dot format files that can be visualized by graphviz. On Debian/Ubuntu, install graphviz with
$ sudo apt-get install graphviz
foma can read att format files (produced with hfst-fst2txt for anything HFST can read, or lt-print for anything from lttoolbox); assuming you've got such a file named myfst.att, you can do
$ foma
foma[0]: read att myfst.att
foma[1]: view
to display the full FST. That will show each input/output pair on each edge between states of the FST.
But you say "during runs" – are you talking about also showing the queue of "live states"? If so, I don't know of a tool that does this; it would be nice to have! One thing you could do is modify the HFST source to output the list of live states and string vectors as it's processing, and then combine that with the dot file to e.g. colour in the live states. (If so, you may want to take this to the #hfst channel on irc.freenode.net.)
There is also a script att2dot.py at https://ftyers.github.io/2017-%D0%9A%D0%9B_%D0%9C%D0%9A%D0%9B/hfst.html that can be used on the command line like
hfst-fst2txt chv.lexc.hfst | python3 att2dot.py | dot -Tpng -ochv.lexc.png
if you prefer something more scriptable. If you use HFST's Python library, you might be able to get the "live states" for every part of an analysis more easily.
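In case that script moves, here is a minimal sketch of the same idea in Python: it reads tab-separated AT&T transition lines (the format hfst-fst2txt prints) on stdin and emits a Graphviz digraph with each edge labelled with its input:output pair. It only handles the common line shapes (four or more fields for a transition, one or two for a final state) and ignores weights:
import sys

def att_to_dot(lines):
    # Build a Graphviz digraph from AT&T-format FST text.
    out = ["digraph fst {", "  rankdir=LR;"]
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        if len(fields) >= 4:
            # Transition line: src, dst, input, output[, weight]
            src, dst, inp, outp = fields[:4]
            out.append('  %s -> %s [label="%s:%s"];' % (src, dst, inp, outp))
        elif fields[0]:
            # Final-state line: state[, weight]
            out.append("  %s [shape=doublecircle];" % fields[0])
    out.append("}")
    return "\n".join(out)

if __name__ == "__main__":
    print(att_to_dot(sys.stdin))
It is used the same way as above: hfst-fst2txt myfst.hfst | python3 att2dot_sketch.py | dot -Tpng -o myfst.png (the script name is whatever you save it as).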

Django - cannot get loaddata to work

I have been using a Django database for a while without any major issues. Today I needed to set default values for a few tables for the first time. I created a fixtures directory in the top-level Django directory. Then I created the files for the default values. However, I keep getting error messages and I'm not sure why.
First I tried to use .sql files. It is worth noting that these tables are very simple; they only have one value, "name". My SQL file looked like this:
INSERT INTO MyTable (name) VALUES ('Default');
I saved this as MyTable.sql. When I ran the command python manage.py loaddata fixtures/MyTable.sql, I got this error message:
CommandError: Problem installing fixture 'MyTable': sql is not a known serialization format.
(Note: I also tried without the fixtures/ part, for the above example and the next, and got identical results).
I asked my project lead and he said he doesn't think SQL files can be used for this. So, I tried JSON files. My MyTable.json looked like this:
[
  {
    "model": "mydatabase.MyTable",
    "pk": 1,
    "fields": {
      "name": "Default"
    }
  }
]
I'll be very upfront and admit I've never worked with JSON in this context before, only in web development, so I don't know if the issue is something I'm doing wrong here. I tried to base it on the formatting I found here. When I ran this through the loaddata command again, I got this error message:
C:\Python27\lib\site-packages\django-1.6.1-py2.7.egg\django\core\management\commands\loaddata.py:216: UserWarning: No fixture named 'fixtures/MyTable' found.
This is my first time doing this and I've had a bit of a hard time finding documentation to figure out what I'm doing wrong. Could anyone please offer advice? Thanks!
In my case, the problem was the suffix of the dump file.
python manage.py dumpdata -o my_dump
python manage.py loaddata my_dump # this fails
# rename my_dump to my_dump.json
python manage.py loaddata my_dump.json # this works
So I guess dumpdata implicitly uses JSON as its output format, while loaddata needs a format hint from the file name suffix.
We can do this as shown below.
models.py
from django.db import models

class Model_Name(models.Model):
    name = models.CharField(max_length=50)

    def __str__(self):
        return self.name
app_name/fixtures/model_name.json
[
  {
    "model": "app_name.model_name",
    "pk": 1,
    "fields": {
      "name": "My app name"
    }
  }
]
command
./manage.py loaddata app_name/fixtures/model_name.json
Output: Installed 1 object(s) from 1 fixture(s)
To create your fixture files, start with an empty database and then add some data to it using the django-admin site, the Django shell, or even pure SQL. After that you can do a
python manage.py dumpdata # to dump all your data, or
python manage.py dumpdata app_name # to dump all data of a specific app, or
python manage.py dumpdata app_name.model_name # to dump all data of a specific model
The above will print data to your stdout. In order to write it to a file use a redirect (>), for instance
python manage.py dumpdata auth.User > user_fixture.json
Update: I just saw that you are using Windows -- remember to write the path to your fixtures using backslashes (\).
For those who are still having this issue:
The error is telling you that
CommandError: Problem installing fixture 'MyTable': sql is not a known serialization format.
You can read that as:
sql is not a known serialization format.
loaddata will not accept a file whose extension is not a known serialization format (such as .json, .xml, or .yaml); .sql is not one of them.
So to solve the problem, use this command:
$ python manage.py loaddata <your_dump_filename>.json

How to get a list of files modified since date/revision in Accurev

I have created a workspace backed by a collaboration stream. The stream is updated regularly by team members. My goal is to take the modified files in a given path and put them into another repository (and to do this regularly).
The question is how to create a list of the files which were modified since a given revision or date (I don't know which approach is best). A command-line solution is preferable.
Once I get the file list, I will create a script to automate taking the files from one place and putting them in another.
accurev hist -s Your_Stream -t "2013/05/16 01:00:00"-now -a -fl
You can run accurev stat -m -fx and then parse the resulting XML. The <element> elements will have a modTime attribute, which is the UNIX timestamp of when the file was last modified.
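As a rough sketch of that parsing step in Python (the location and modTime attribute names reflect typical accurev stat -fx output, so verify them against your own XML first):
import subprocess
import xml.etree.ElementTree as ET

CUTOFF = 1368662400  # example UNIX timestamp: keep files modified after this

# Capture the XML status report for modified files in the workspace.
xml_out = subprocess.run(
    ["accurev", "stat", "-m", "-fx"],
    capture_output=True, text=True, check=True,
).stdout

root = ET.fromstring(xml_out)
for elem in root.iter("element"):
    if int(elem.get("modTime", "0")) >= CUTOFF:
        # location holds the depot-relative path of the modified file.
        print(elem.get("location"))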