Django - cannot get loaddata to work - sql

I have been using a Django database for a while without any major issues. Today I needed to set default values for a few tables for the first time. I created a fixtures directory in the top-level Django directory. Then I created the files for the default values. However, I keep getting error messages and I'm not sure why.
First I tried to use .sql files. It is worth noting that these tables are very simple; they only have one field, "name". My SQL file looked like this:
INSERT INTO MyTable (name) VALUES ('Default');
I saved this as MyTable.sql. When I ran the command python manage.py loaddata fixtures/MyTable.sql, I got this error message:
CommandError: Problem installing fixture 'MyTable': sql is not a known serialization format.
(Note: I also tried without the fixtures/ part, for the above example and the next, and got identical results).
I asked my project lead and he said he doesn't think SQL files can be used for this. So, I tried JSON files. My MyTable.json looked like this:
[
  {
    "model": "mydatabase.MyTable",
    "pk": 1,
    "fields": {
      "name": "Default"
    }
  }
]
I'll be upfront and admit I've never worked with JSON in this context before, only in web development, so I don't know whether the issue is something I'm doing wrong here. I tried to base it on the formatting I found here. When I ran this through the loaddata command again, I got this error message:
C:\Python27\lib\site-packages\django-1.6.1-py2.7.egg\django\core\management\commands\loaddata.py:216: UserWarning: No fixture named 'fixtures/MyTable' found.
This is my first time doing this and I've had a bit of a hard time finding documentation to figure out what I'm doing wrong. Could anyone please offer advice? Thanks!

In my case, the problem was the suffix of the output file.
python manage.py dumpdata -o my_dump
python manage.py loaddata my_dump        # it fails
# rename the file my_dump to my_dump.json
python manage.py loaddata my_dump.json   # it works
So I guess dumpdata implicitly uses JSON as its output format, while loaddata needs a format hint from the file-name suffix.
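If you actually want to pipe instead of renaming the file, newer Django versions (2.0+, if I remember correctly; the Django 1.6 in the question does not support this) can read a fixture from stdin when you pass a dash together with an explicit format:
python manage.py dumpdata myapp | python manage.py loaddata --format=json -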

We can do this like below.
models.py:
from django.db import models

class Model_Name(models.Model):
    name = models.CharField(max_length=50)

    def __str__(self):
        return self.name
fixtures (app_name/fixtures/model_name.json):
[
  {
    "model": "app_name.model_name",
    "pk": 1,
    "fields": {
      "name": "My app name"
    }
  }
]
command:
./manage.py loaddata app_name/fixtures/model_name.json
Output: Installed 1 object(s) from 1 fixture(s)

To create your fixture files, start with an empty database and then add some data to it using the django-admin, the Django shell, or even pure SQL. After that you can run
python manage.py dumpdata # to dump all your data, or
python manage.py dumpdata app_name # to dump all data of a specific app, or
python manage.py dumpdata app_name.model_name # to dump all data of a specific model
The above will print data to your stdout. In order to write it to a file use a redirect (>), for instance
python manage.py dumpdata auth.User > user_fixture.json
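If you want the dump to land directly in your app's fixtures directory in a readable form, something like this should work (the --indent flag is standard; the app and model names are taken from the question and the paths are just an example):
python manage.py dumpdata mydatabase.MyTable --indent 2 > mydatabase/fixtures/MyTable.json
python manage.py loaddata MyTable.json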
Update: I just saw that you are using Windows -- remember that fixture paths on Windows use backslashes (\).

For those who are still having this issue:
The error is telling you
CommandError: Problem installing fixture 'MyTable': sql is not a known serialization format.
You can read it as
sql is not a known serialization format.
loaddata infers the serialization format from the file's extension, so it will not accept a file whose suffix it doesn't recognize; rename your dump so it ends in .json.
So to solve your problem use this command:
$ python manage.py loaddata <your_dump_filename>.json

Forge CSV Data Adapter - Reference Application Data Visualization

I hope you are all doing well. I've recently been trying to modify the Autodesk reference application so I can create a heat-map with my own sensor data and my own model. However, I have been having difficulties. Has anyone done something similar before? Do you have to create two separate .env files, or do you just change both the credentials for the FORGE_ID portion and the CSV portion in the same one?
Thank you! (I attached an example of what it looks like with only the CSV portion changed.)
[screenshot: .env file with only the CSV portion changed]
The .env file must be unique, as it is loaded by the dotenv package to expose all of these values as environment variables (Node.js accesses them with process.env.[VARIABLE_NAME]).
To ensure your .env file is used by dotenv, you must set ENV to local; if you look at /server/router/index.js, you will find these lines:
if (process.env.ENV == "local") {
    require("dotenv").config({
        path: __dirname + "/../.env",
    });
}
As you are on macOS, you could try this command: export ENV=local && npm run dev to start your server in dev mode.
The CSV portion you showed should work; just put all of those lines into your current .env file and you should be able to use your own model AND add your own sensor data.
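For illustration, a single combined .env might look roughly like this; the variable names below are placeholders, so take the real ones from the reference application's README and your screenshot:
# One .env for everything (names are illustrative, not the definitive list)
ENV=local
FORGE_CLIENT_ID=<your Forge client id>
FORGE_CLIENT_SECRET=<your Forge client secret>
ADAPTER_TYPE=csv
# ...plus the CSV_* lines from your screenshot...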

How to write match results from .cypher into textfile via cypher shell (Windows)?

I want to write match results based on Cypher code in a .cypher file, via cypher-shell, into a text file (I am trying to do this on Windows). The cypher file contains:
:begin
match(n) return n;
:commit
I tried to execute:
type x.cypher | cypher-shell.bat -u user -p secret > output.txt
I get no error, but at the end there is just an empty text file "output.txt" inside the bin folder. Testing the Cypher code directly in the cypher-shell (without piping) works. Can anyone help, please?
Consider using the APOC library, which can export to different formats; maybe it can help you:
Export to CSV
Export to JSON
Export to Cypher Script
Export to GraphML
Export to Gephi
https://neo4j.com/labs/apoc/xx/export/ --> replace xx with your Neo4j version, e.g. 4.0
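For example, to dump the whole graph, or just the result of your match, to a file (a minimal sketch; it assumes APOC is installed and apoc.export.file.enabled=true is set in apoc.conf):
// export every node and relationship to CSV
CALL apoc.export.csv.all("output.csv", {});
// or export only the result of a query
CALL apoc.export.csv.query("MATCH (n) RETURN n", "output.csv", {});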

How to use Bamboo plan variables in an inline script task?

When defining a Bamboo plan variable, the page has this:
For task configuration fields, use the syntax ${bamboo.myvariablename}. For inline scripts, variables are exposed as shell environment variables which can be accessed using the syntax $BAMBOO_MY_VARIABLE_NAME (Linux/Mac OS X) or %BAMBOO_MY_VARIABLE_NAME% (Windows).
However, that doesn't work in my Linux inline script. For example, I have the following defined as a plan variable:
name: my_plan_var
value: some_string
My inline script is simply...
PLAN_VAR=$BAMBOO_MY_PLAN_VAR
echo "Plan var: $PLAN_VAR"
and I just get a blank string.
I've tried this
PLAN_VAR=${bamboo.my_plan_var}
But I get
${bamboo.my_plan_var}: bad substitution
on the log viewer window.
Any pointers?
I tried the following and it works:
On the plan, I set my_plan_var to "it works" (w/o quotes)
In the inline script (don't forget the shebang on the first line):
#!/bin/sh
PLAN_VAR=$bamboo_my_plan_var
echo "testing: $PLAN_VAR"
And I got the expected result:
testing: it works
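In other words, inside an inline script Bamboo exposes plan variables as lowercase environment variables prefixed with bamboo_, not the upper-case BAMBOO_ form quoted from the docs. If in doubt, a quick debugging sketch like this lists everything Bamboo injected:
#!/bin/sh
# Print every environment variable Bamboo provided to the script
env | grep '^bamboo_'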
I also wanted to create a Bamboo variable, and the only thing I've found to share it between scripts is inject-variables, as in the following.
Add this to your bamboo-spec.yaml after the script that creates the variable:
Build:
  tasks:
    - script: create-bamboo-var.sh
    - inject-variables:
        file: bamboo-specs/vars.yaml
        scope: RESULT
        # namespace: plan
    - script: echo ${bamboo.inject.GIT_VERSION} # just for testing
Note: Namespace defaults to inject.
In create-bamboo-var.sh, create the file bamboo-specs/vars.yaml:
#!/bin/bash
versionStr=$(git describe --tags --always --dirty --abbrev=4)
echo "GIT_VERSION: ${versionStr}" > ./bamboo-specs/vars.yaml
Or for multiple lines you can use:
SW_NUMBER_DIGITS=${1} # Passed as first parameter to build script
cat <<EOT > ./bamboo-specs/vars.yaml
GIT_VERSION: ${versionStr}
SW_NUMBER_APP: ${SW_NUMBER_DIGITS}
EOT
Scope can be local or result. Local means the variable is only available for the current job; result means it can be used in subsequent stages of this plan and in releases created from the result.
Namespace is just used to avoid naming collisions with other variables.
With the above you can use that variable in later scripts with ${bamboo.inject.GIT_VERSION}. The last script task is just to see that it is working in other scripts. You can also see the variables in the web app as build meta data.
I'm using the above script before the build (in my case compiling C-Code) takes place so I can also create a version.h file that can be used by the source code.
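For instance, a minimal sketch of that idea (the file name and macro names are my own choices, not anything Bamboo mandates):
#!/bin/bash
# Generate a header the C sources can include (illustrative sketch)
versionStr=$(git describe --tags --always --dirty --abbrev=4)
cat <<EOT > version.h
#ifndef VERSION_H
#define VERSION_H
#define GIT_VERSION "${versionStr}"
#endif
EOT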
This is still a bit cumbersome, but I'm happy with it and I hope it helps others configure Bamboo. The Bamboo documentation could be better (still a lot of trial and error).

In Lua, how to print the console output into a file (piping) instead of using the standard output?

I'm working with Torch7 and the Lua programming language. I need a command that redirects the output of my console to a file, instead of printing it to my shell.
For example, in Linux, when you type:
$ ls > dir.txt
The system will print the output of the command "ls" to the file dir.txt, instead of printing it to the default output console.
I need a similar command for Lua. Does anyone know it?
[EDIT] A user suggested to me that this operation is called piping. So the question becomes: "How do I do piping in Lua?"
[EDIT2] I would like some command (written here as #) that does:
$ torch 'my_program' # printed_output.txt
Have a look here -> http://www.lua.org/pil/21.1.html
io.write seems to be what you are looking for.
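Combined with io.output, which swaps Lua's default output file, io.write lets you send everything to a file. A minimal sketch (the file name is just an example):
-- Redirect the default output stream to a file; subsequent io.write
-- calls go there instead of stdout (print, however, still uses stdout).
io.output("printed_output.txt")
io.write("hello from Lua\n")
io.output():close()  -- flush and close the file when done
Note that plain shell redirection should also work with Torch scripts, e.g. th my_program.lua > printed_output.txt.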
Lua has no built-in function for dumping the console output to a file.
If your application logs its output -- which is probably what you're trying to do -- it will only be possible by modifying the Lua source code.
If your internal system has access to the output of the console, you could do something similar to this (and set it on a timer, so it runs every 25 ms or so):
dumpoutput = function()
    local file = io.open([path to file dump here], "w+")
    for i, line in ipairs([console output function]) do
        file:write("\n" .. line)
    end
    file:close()
end
Note that the console output function has to store the output of the console in a table.
To clear the console at the end, just do os.execute( "cls" ).

How to force STORE (overwrite) to HDFS in Pig?

When developing Pig scripts that use the STORE command, I have to delete the output directory before every run or the script stops and reports:
2012-06-19 19:22:49,680 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 6000: Output Location Validation Failed for: 'hdfs://[server]/user/[user]/foo/bar More info to follow:
Output directory hdfs://[server]/user/[user]/foo/bar already exists
So I'm searching for an in-Pig solution to automatically remove the directory, also one that doesn't choke if the directory is non-existent at call time.
In the Pig Latin Reference I found the shell command invoker fs. Unfortunately the Pig script breaks whenever anything produces an error. So I can't use
fs -rmr foo/bar
(i.e. remove recursively), since it breaks if the directory doesn't exist. For a moment I thought I might use
fs -test -e foo/bar
which is a test and shouldn't break, or so I thought. However, Pig again interprets test's return code on a non-existent directory as a failure code and breaks.
There is a JIRA ticket for the Pig project addressing my problem and suggesting an optional parameter OVERWRITE or FORCE_WRITE for the STORE command. Anyway, I'm using Pig 0.8.1 out of necessity and there is no such parameter.
At last I found a solution on grokbase. Since finding the solution took too long I will reproduce it here and add to it.
Suppose you want to store your output using the statement
STORE Relation INTO 'foo/bar';
Then, in order to delete the directory, you can call at the start of the script
rmf foo/bar
No ";" or quotations required since it is a shell command.
I cannot reproduce it now, but at some point I got an error message (something about missing files), and I can only assume that rmf interfered with map/reduce. So I recommend putting the call before any relation declaration; after SETs, REGISTERs, and %defaults should be fine.
Example:
SET mapred.fairscheduler.pool 'inhouse';
REGISTER /usr/lib/pig/contrib/piggybank/java/piggybank.jar;
%default name 'foobar'
rmf foo/bar
Rel = LOAD 'something.tsv';
STORE Rel INTO 'foo/bar';
Once you use the fs command, there are a lot of ways to do this. For an individual file, I wound up adding this to the beginning of my scripts:
-- Delete file (won't work for output, which will be a directory,
-- but will work for a file that gets copied or moved during
-- the script.)
fs -touchz top_100
rm top_100
For a directory
-- Delete dir
fs -rm -r out
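If your Hadoop version is recent enough (an assumption; the flag was not around on every Pig 0.8.1-era cluster), the -f flag makes the delete ignore a missing directory entirely:
-- Delete dir, ignoring "does not exist" errors
fs -rm -r -f out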