I am creating a new vscode extension, and I need to extend the standard usage of the jsonValidation system already present in vscode.
Note : I am talking about the system defined in package.json :
"contributes" : {
"languages": [
{
"id" : "yml",
"filenamePatterns": ["module.service"]
},
{
"id" : "json",
"filenamePatterns": ["module.*"]
}
],
"jsonValidation": [
{
"fileMatch": "module.test",
"url": "./resources/test.schema"
}
]
}
Now, I need to create a dynamic mapping, where the fileMatch/url fields are defined from some internal rules (like the version and other internal data). The standard usage is static: one fileMatch -> one schema.
For example, I want to read the version from the JSON file to validate, and pick the schema based on it:
{
"version" : "1.1"
}
The validation schema must then be test-schema.1.1 instead of test-schema.1.0.
Note: the question is only about modifying the configuration provided by package.json from extensions.ts.
Thanks for the support
** EDIT, since the previous solution was not working in all cases
One solution is to modify the package.json contribution when the extension is activated.
export function activate(context: vscode.ExtensionContext) {
    const myPlugin = vscode.extensions.getExtension("your.plugin.id");
    if (!myPlugin) {
        throw new Error("Composer plugin is not found...");
    }

    // Get the current workspace path to find the schema later.
    const folderPath = vscode.workspace.workspaceFolders;
    if (!folderPath) {
        return;
    }
    const baseUri: vscode.Uri = folderPath[0].uri;

    let packageJSON = myPlugin.packageJSON;
    if (packageJSON && packageJSON.contributes && packageJSON.contributes.jsonValidation) {
        let jsonValidation = packageJSON.contributes.jsonValidation;
        const schemaUri: vscode.Uri = vscode.Uri.joinPath(baseUri, "/schema/value-0.3.0.json-schema");
        const schema = new JsonSchemaMatch("value.ospp", schemaUri);
        jsonValidation.push(schema);
    }
}
And the JSON schema class:
class JsonSchemaMatch {
    fileMatch: string;
    url: string;

    constructor(fileMatch: string, url: vscode.Uri) {
        this.fileMatch = fileMatch;
        this.url = url.path;
    }
}
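To address the original goal of picking the schema from the version field, a minimal sketch along the same lines could read the target file before pushing the entry. This is only an illustration: the file name module.test, the version field and the test-schema.<version> naming are assumptions, and it reuses the vscode import and the JsonSchemaMatch class from above.
// Sketch: choose the schema URL from the "version" field of the file to validate.
async function addVersionedSchema(baseUri: vscode.Uri, jsonValidation: { fileMatch: string; url: string }[]) {
    const fileUri = vscode.Uri.joinPath(baseUri, "module.test"); // illustrative file name
    try {
        const raw = await vscode.workspace.fs.readFile(fileUri);
        const version = JSON.parse(Buffer.from(raw).toString("utf8")).version ?? "1.0";
        const schemaUri = vscode.Uri.joinPath(baseUri, `/schema/test-schema.${version}`); // illustrative naming
        jsonValidation.push(new JsonSchemaMatch("module.test", schemaUri));
    } catch (err) {
        console.error("Could not read version, keeping the static schema", err);
    }
}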
Another important point: the elements under contributes are not re-read after modification. For example:
class Language {
    id: string;
    filenamePatterns: string[];

    constructor(id: string, filenamePatterns: string[]) {
        this.id = id;
        this.filenamePatterns = filenamePatterns;
    }
}

if (packageJSON && packageJSON.contributes && packageJSON.contributes.languages) {
    let languages: Language[] = packageJSON.contributes.languages;
    for (let language of languages) {
        if (language.id == "json") {
            language.filenamePatterns.push("test.my-json-type");
        }
    }
}
This change has no effect, since the file associations have already been loaded by the time the extension activates (I have not dug into the reason, but I think this is the case).
In this case, creating a settings.json in the workspace directory can do the job:
settings.json
{
    "files.associations": {
        "target.snmp": "json",
        "stack.cfg": "json"
    }
}
Be aware that settings.json may already have been created by the user for legitimate reasons, so don't overwrite it, just add your entries to it.
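Rather than editing the file by hand, the workspace setting can also be merged from the extension itself. This is a sketch using the standard configuration API; it assumes the same vscode import as above and keeps whatever associations the user already has:
// Sketch: merge extra file associations into the workspace settings without overwriting user entries.
async function addFileAssociations() {
    const filesConfig = vscode.workspace.getConfiguration("files");
    const current = filesConfig.get<Record<string, string>>("associations") ?? {};
    const merged = { ...current, "target.snmp": "json", "stack.cfg": "json" };
    await filesConfig.update("associations", merged, vscode.ConfigurationTarget.Workspace);
}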
I am using the solhint plugin for linting Solidity code, but I want to add custom rules for the code analysis. How do I add custom rules as part of the ruleset?
Code added for custom rule:
const BaseChecker = require('./../base-checker')

const ruleId = 'no-foos'
const meta = {
  type: 'naming',
  docs: {
    description: `Don't use Foo for Contract name`,
    category: 'Style Guide Rules'
  },
  isDefault: false,
  recommended: true,
  defaultSetup: 'warn',
  schema: null
}

class NoFoosAllowed extends BaseChecker {
  constructor(reporter) {
    super(reporter, ruleId, meta)
  }

  ContractDefinition(ctx) {
    const { name } = ctx
    if (name === 'Foo') {
      this.reporter.error(ctx, this.ruleId, 'Contracts cannot be named "Foo"')
    }
  }
}

module.exports = NoFoosAllowed
I have saved the above code into a new JS file inside the rules -> naming folder, and I have used the 'no-foos' rule id inside my .solhint.json file, in the rules property.
{
  "extends": "solhint:all",
  "plugins": [],
  "rules": {
    "avoid-suicide": "error",
    "avoid-sha3": "warn",
    "no-foos": "warn",
    "var-name-mixedcase": "error"
  }
}
Each ruleset loops through all rules and enables (or doesn't enable) each one based on the rule's metadata and the ruleset config.
So you can create a custom rule in the rules folder and give it a combination of metadata that your ruleset will enable.
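As an illustration of that mechanism, here is a simplified sketch (not solhint's actual loader; the names and shapes are invented) of how metadata plus the ruleset config decide what gets enabled:
// Simplified sketch of "loop through all rules and enable them from metadata + config".
interface RuleMeta { ruleId: string; recommended: boolean; defaultSetup: string }

function enabledRules(allRules: RuleMeta[], config: Record<string, string>): Map<string, string> {
    const enabled = new Map<string, string>();
    for (const rule of allRules) {
        const explicit = config[rule.ruleId];
        if (explicit && explicit !== "off") {
            enabled.set(rule.ruleId, explicit);          // an explicit entry in .solhint.json wins
        } else if (!explicit && rule.recommended) {
            enabled.set(rule.ruleId, rule.defaultSetup); // otherwise recommended rules use their default severity
        }
    }
    return enabled;
}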
In Swift/iOS development, for example, it's possible to differentiate builds for different environments with "flags" such as:
#if STAGING
// one set of logic here
#endif
#if PRODUCTION
// another set of logic here
#endif
Is it possible to achieve the same with a Vue.js project, and how would we go about doing it? I am aware that routes can be made conditionally available for different roles (which is also quite neat), but I am optimally looking for an option to differentiate on the source code level.
Hope someone has some great insights! It could include:
How to exclude parts of a file (such as the #if STAGING above) from a build target
How to exclude entire files from a build target
etc.
You have the ability to use this syntax:
if (process.env.NODE_ENV === 'production') {
  console.log("this is the prod env!!!!!!!!!!");
  config.output.path = path.resolve(__dirname, "dist");
}
Make sure that you run the script with the correct env variables for each environment (local, dev, staging, prod, etc.). :D
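If you want dedicated flags such as STAGING or PRODUCTION rather than only NODE_ENV, one common option (sketched here; the flag names are your choice, not a Vue convention) is webpack's DefinePlugin, which replaces the expression at build time so the minifier can drop the dead branch, much like the Swift #if blocks:
// webpack.config.js (sketch) -- define custom build flags per environment
const webpack = require("webpack");

module.exports = {
    // ...the rest of your config
    plugins: [
        new webpack.DefinePlugin({
            "process.env.STAGING": JSON.stringify(process.env.STAGING === "true"),
            "process.env.PRODUCTION": JSON.stringify(process.env.NODE_ENV === "production")
        })
    ]
};
Then a block like if (process.env.STAGING) { ... } in your source compiles to a literal true or false. In a Vue CLI project the same plugin can be added through configureWebpack in vue.config.js, and variables prefixed with VUE_APP_ in .env files are exposed this way automatically.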
Just change the vue-loader output.
The source code:
<template v-if="process.env.NODE_ENV === 'development'">
development only
</template>
Default output:
var render = function() {
  var _vm = this
  var _h = _vm.$createElement
  var _c = _vm._self._c || _h
  return _c(
    "div",
    { attrs: { id: "app" } },
    [
      _vm.process.env.NODE_ENV === "development"
        ? [_vm._v(" development only ")]
        : _vm._e(),
      _c("router-view")
    ],
    2
  )
}
Just use a regex to replace _vm.process.env with process.env:
// webpack.config.js
module: {
  rules: [{
    // must set post
    enforce: 'post',
    test: /\.vue$/,
    use: [{
      loader: './myLoader'
    }]
  }]
}
// myLoader.js
module.exports = function (source, map) {
  if (source.indexOf('_vm.process.env') > -1) {
    source = source.replace(/_vm.process.env/g, 'process.env')
  }
  this.callback(
    null,
    source,
    map
  )
}
Finally, the vue-loader result changes:
var render = function() {
  var _vm = this
  var _h = _vm.$createElement
  var _c = _vm._self._c || _h
  return _c(
    "div",
    { attrs: { id: "app" } },
    [
      // change to true
      true
        ? [_vm._v(" development only ")]
        : undefined,
      _c("router-view")
    ],
    2
  )
}
I want to get the schema from the server.
I can get all entities with the types but I'm unable to get the properties.
Getting all types:
query {
  __schema {
    queryType {
      fields {
        name
        type {
          kind
          ofType {
            kind
            name
          }
        }
      }
    }
  }
}
How to get the properties for a type:
__type(name: "Person") {
  kind
  name
  fields {
    name
    type {
      kind
      name
      description
    }
  }
}
How can I get all types with their properties in only 1 request? Or even better: how can I get the whole schema with the mutations, enums, types, ...?
Update
Using graphql-cli is now the recommended workflow to get and update your schema.
The following commands will get you started:
# install via NPM
npm install -g graphql-cli
# Setup your .graphqlconfig file (configure endpoints + schema path)
graphql init
# Download the schema from the server
graphql get-schema
You can even listen for schema changes and continuously update your schema by running:
graphql get-schema --watch
In case you just want to download the GraphQL schema, use the following approach:
The easiest way to get a GraphQL schema is using the CLI tool get-graphql-schema.
You can install it via NPM:
npm install -g get-graphql-schema
There are two ways to get your schema. 1) GraphQL IDL format or 2) JSON introspection query format.
GraphQL IDL format
get-graphql-schema ENDPOINT_URL > schema.graphql
JSON introspection format
get-graphql-schema ENDPOINT_URL --json > schema.json
or
get-graphql-schema ENDPOINT_URL -j > schema.json
For more information you can refer to the following tutorial: How to download the GraphQL IDL Schema
This is the query that GraphiQL uses (network capture):
query IntrospectionQuery {
__schema {
queryType {
name
}
mutationType {
name
}
subscriptionType {
name
}
types {
...FullType
}
directives {
name
description
locations
args {
...InputValue
}
}
}
}
fragment FullType on __Type {
kind
name
description
fields(includeDeprecated: true) {
name
description
args {
...InputValue
}
type {
...TypeRef
}
isDeprecated
deprecationReason
}
inputFields {
...InputValue
}
interfaces {
...TypeRef
}
enumValues(includeDeprecated: true) {
name
description
isDeprecated
deprecationReason
}
possibleTypes {
...TypeRef
}
}
fragment InputValue on __InputValue {
name
description
type {
...TypeRef
}
defaultValue
}
fragment TypeRef on __Type {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
}
}
}
}
}
}
}
}
You can use GraphQL-JS's introspection query to get everything you'd like to know about the schema:
import { introspectionQuery } from 'graphql';
If you want just the information for types, you can use this:
{
  __schema {
    types {
      ...FullType
    }
  }
}
Which uses the following fragment from the introspection query:
fragment FullType on __Type {
kind
name
description
fields(includeDeprecated: true) {
name
description
args {
...InputValue
}
type {
...TypeRef
}
isDeprecated
deprecationReason
}
inputFields {
...InputValue
}
interfaces {
...TypeRef
}
enumValues(includeDeprecated: true) {
name
description
isDeprecated
deprecationReason
}
possibleTypes {
...TypeRef
}
}
fragment InputValue on __InputValue {
name
description
type { ...TypeRef }
defaultValue
}
fragment TypeRef on __Type {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
}
}
}
}
}
}
}
}
If that seems complicated, it's because fields can be arbitrarily deeply wrapped in nonNulls and Lists, which means that technically even the query above does not reflect the full schema if your fields are wrapped in more than 7 layers (which probably isn't the case).
You can see the source code for introspectionQuery here.
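To make that wrapping concrete, here is a small helper (a sketch over the introspection JSON shape, not part of graphql-js) that walks ofType until it reaches the named type:
// Sketch: unwrap NON_NULL / LIST wrappers from an introspection type reference.
// The TypeRef shape mirrors the fragment above: { kind, name, ofType }.
interface TypeRef { kind: string; name: string | null; ofType?: TypeRef | null }

function unwrapTypeRef(ref: TypeRef): { name: string | null; wrappers: string[] } {
    const wrappers: string[] = [];
    let current: TypeRef | null | undefined = ref;
    while (current && (current.kind === "NON_NULL" || current.kind === "LIST")) {
        wrappers.push(current.kind); // remember each wrapper, outermost first
        current = current.ofType;    // descend one level
    }
    return { name: current ? current.name : null, wrappers };
}

// e.g. a [String!]! field yields { name: "String", wrappers: ["NON_NULL", "LIST", "NON_NULL"] }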
Using apollo cli:
npx apollo schema:download --endpoint=http://localhost:4000/graphql schema.json
Update
After getting sick of modifying my previous script all the time, I caved and made my own CLI tool gql-sdl. I still can't find a different tool that can download GraphQL SDL with zero config but would love for one to exist.
Basic usage:
$ gql-sdl https://api.github.com/graphql -H "Authorization: Bearer ghp_[redacted]"
directive @requiredCapabilities(requiredCapabilities: [String!]) on OBJECT | SCALAR | ARGUMENT_DEFINITION | INTERFACE | INPUT_OBJECT | FIELD_DEFINITION | ENUM | ENUM_VALUE | UNION | INPUT_FIELD_DEFINITION
"""Autogenerated input type of AbortQueuedMigrations"""
input AbortQueuedMigrationsInput {
"""The ID of the organization that is running the migrations."""
ownerId: ID!
"""A unique identifier for the client performing the mutation."""
clientMutationId: String
}
...
The header argument -H is technically optional but most GraphQL APIs require authentication via headers. You can also download the JSON response instead (--json) but that's a use case already well served by other tools.
Under the hood this still uses the introspection query provided by GraphQL.js, so if you're looking to incorporate this functionality into your own code see the example below.
Previous answer
Somehow I wasn't able to get any of the suggested CLI tools to output the schema in GraphQL's Schema Definition Language (SDL) instead of the introspection result JSON. I ended up throwing together a really quick Node script to make the GraphQL library do it for me:
const fs = require("fs");
const { buildClientSchema, getIntrospectionQuery, printSchema } = require("graphql");
const fetch = require("node-fetch");
async function saveSchema(endpoint, filename) {
const response = await fetch(endpoint, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ query: getIntrospectionQuery() })
});
const graphqlSchemaObj = buildClientSchema((await response.json()).data);
const sdlString = printSchema(graphqlSchemaObj);
fs.writeFileSync(filename, sdlString);
}
saveSchema("https://example.com/graphql", "schema.graphql");
getIntrospectionQuery() has the complete introspection query you need to get everything, and then buildClientSchema() and printSchema() turns the JSON mess into GraphQL SDL.
Wouldn't be too difficult to make this into a CLI tool itself but that feels like overkill.
You can use the Hasura's graphqurl utility
npm install -g graphqurl
gq <endpoint> --introspect > schema.graphql
# or if you want it in json
gq <endpoint> --introspect --format json > schema.json
Full documentation: https://github.com/hasura/graphqurl
You can download a remote GraphQL server's schema with the following command. When the command succeeds, you should see a new file named schema.json in the current working directory.
~$ npx apollo-cli download-schema $GRAPHQL_URL --output schema.json
You can use GraphQL-Codegen with the ast-plugin
npm install --save graphql
npm install --save-dev @graphql-codegen/cli
npx graphql-codegen init
Follow the steps to generate the codegen.yml file
Once the tool is installed, you can use the plugin to download the schema which is schema-ast
The best is to follow the instructions on the page to install it… but basically:
npm install --save-dev @graphql-codegen/schema-ast
Then configure the codegen.yml file to set which schema(s) is/are the source of truth and where to put the downloaded schema(s) file:
schema:
  - 'http://localhost:3000/graphql'
generates:
  path/to/file.graphql:
    plugins:
      - schema-ast
    config:
      includeDirectives: true
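With codegen.yml in place, running npx graphql-codegen regenerates path/to/file.graphql from the endpoint; the init wizard from the previous step usually offers to add an npm script that does the same.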
I was also looking and came across this Medium article on GraphQL
The below query returned many details regarding the schema, the queries, and their input and output parameter types.
fragment FullType on __Type {
kind
name
fields(includeDeprecated: true) {
name
args {
...InputValue
}
type {
...TypeRef
}
isDeprecated
deprecationReason
}
inputFields {
...InputValue
}
interfaces {
...TypeRef
}
enumValues(includeDeprecated: true) {
name
isDeprecated
deprecationReason
}
possibleTypes {
...TypeRef
}
}
fragment InputValue on __InputValue {
name
type {
...TypeRef
}
defaultValue
}
fragment TypeRef on __Type {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
}
}
}
}
}
}
}
}
query IntrospectionQuery {
__schema {
queryType {
name
}
mutationType {
name
}
types {
...FullType
}
directives {
name
locations
args {
...InputValue
}
}
}
}
You can use the IntelliJ plugin JS GraphQL; IDEA will then ask you to create two files, "graphql.config.json" and "graphql.schema.json".
Then you can edit "graphql.config.json" to point to your local or remote GraphQL server:
"schema": {
"README_request" : "To request the schema from a url instead, remove the 'file' JSON property above (and optionally delete the default graphql.schema.json file).",
"request": {
"url" : "http://localhost:4000",
"method" : "POST",
"README_postIntrospectionQuery" : "Whether to POST an introspectionQuery to the url. If the url always returns the schema JSON, set to false and consider using GET",
"postIntrospectionQuery" : true,
"README_options" : "See the 'Options' section at https://github.com/then/then-request",
"options" : {
"headers": {
"user-agent" : "JS GraphQL"
}
}
}
After that, the IDEA plugin will auto-load the schema from the GraphQL server and show the schema JSON in the console like this:
Loaded schema from 'http://localhost:4000': {"data":{"__schema":{"queryType":{"name":"Query"},"mutationType":{"name":"Mutation"},"subscriptionType":null,"types":[{"kind":"OBJECT","name":"Query","description":"","fields":[{"name":"launche
Refer to https://stackoverflow.com/a/42010467/10189759
I would like to point out that if authentication is needed, you probably cannot just use the config file generated from graphql init.
You might have to do something like this, for example, using the GitHub GraphQL API:
{
  "projects": {
    "graphqlProjectTestingGraphql": {
      "schemaPath": "schema.graphql",
      "extensions": {
        "endpoints": {
          "dev": {
            "url": "https://api.github.com/graphql",
            "headers": {
              "Authorization": "Bearer <Your token here>"
            }
          }
        }
      }
    }
  }
}
If you want to do it yourself, read this code:
There is a modular, state-of-the-art tool, 「graphql-cli」; consider looking at it. It uses the 「graphql」 package's buildClientSchema to build an IDL .graphql file from introspection data.
graphql-cli get-schema: integrated into graphql-cli, part 1
graphql-config EndpointsExtension: integrated into graphql-cli, part 2
The graphql npm package's IntrospectionQuery does:
query IntrospectionQuery {
__schema {
queryType {
name
}
mutationType {
name
}
subscriptionType {
name
}
types {
...FullType
}
directives {
name
description
locations
args {
...InputValue
}
}
}
}
fragment FullType on __Type {
kind
name
description
fields(includeDeprecated: true) {
name
description
args {
...InputValue
}
type {
...TypeRef
}
isDeprecated
deprecationReason
}
inputFields {
...InputValue
}
interfaces {
...TypeRef
}
enumValues(includeDeprecated: true) {
name
description
isDeprecated
deprecationReason
}
possibleTypes {
...TypeRef
}
}
fragment InputValue on __InputValue {
name
description
type {
...TypeRef
}
defaultValue
}
fragment TypeRef on __Type {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
ofType {
kind
name
}
}
}
}
}
}
}
}
source
You could use apollo client:codegen. See https://github.com/apollographql/apollo-tooling#apollo-clientcodegen-output
Running on iOS & Android, in CoffeeScript.
I have a model such as:
exports.definition =
  config:
    columns:
      cookie: "string"
    defaults:
      cookie: ""
    adapter:
      # is this valid?
      type: "sql"
      collection_name: "userInfo"
  extendModel: (Model) ->
    _.extend Model::,
      isSignedIn: ->
        this.get('cookie').length > 0
    Model
And an index.xml:
<Alloy>
    <Model id="userInfo" src="userInfo" instance="true"/>
</Alloy>
So, these userInfo properties change during the lifecycle of the app (the user logs in), and I want that cookie to be persisted as well as auto-loaded on app init.
How do I do that in this framework?
UPDATE another Q&A
For reference here: http://developer.appcelerator.com/question/147601/alloy---persist-and-load-a-singleton-model#255723
They don't explain it well in the Appcelerator docs, but if you want to store and retrieve properties using the built-in Alloy properties sync adapter, you have to specify a unique "id" when using models. You did it already in the XML markup: <Model id="userInfo" ...>, but that will work for that view file only. If you want to access/update this property in the controller, you do this:
var UserInfo = Alloy.createModel("userInfo", {id: "userInfo"});
UserInfo.fetch();
UserInfo.set("cookie", "new value");
UserInfo.save();
If you want to keep the reference to this property throughout the code, I believe you just attach it to the global namespace in alloy.js:
var UserInfo = Alloy.createModel("userInfo", {id: "userInfo"});
UserInfo.fetch();
Alloy.Globals.UserInfo = UserInfo;
In the controllers you do:
var UserInfo = Alloy.Globals.UserInfo;
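Since Alloy models are Backbone models, you could also (a small sketch, untested) re-save the singleton automatically whenever the cookie changes, so controllers only ever need to call set():
// in alloy.js, right after the fetch above (sketch; relies on standard Backbone change events)
Alloy.Globals.UserInfo.on('change:cookie', function(model) {
    model.save(); // persist through the configured sync adapter
});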
Put your model userInfo.js into app/models; it will probably look like this:
exports.definition = {
    config: {
        "columns": {
            "cookie": "string"
        },
        "defaults": { "cookie": "" },
        "adapter": {
            "type": "sql",
            "collection_name": "userInfo"
        }
    },
    extendModel: function(Model) {
        _.extend(Model.prototype, {
            isSignedIn: function() {
                return this.get('cookie').length > 0;
            }
        });
        return Model;
    },
    extendCollection: function(Collection) {
        _.extend(Collection.prototype, {
        });
        return Collection;
    }
}
From here it depends on what you want to do, but you can easily fetch the model from the collection userInfo; just put this in your XML file: <Collection src="userInfo"/>.
As a side note, I usually just use the Titanium.App.Properties stuff to store user information. Properties are used for storing application-related data in property/value pairs that persist beyond application sessions and device power cycles. For example:
// Returns the object if it exists, or null if it does not
var lastLoginUserInfo = Ti.App.Properties.getObject('userInfo', null);

if (lastLoginUserInfo === null) {
    var userInfo = { cookie: "Whatever the cookie is", id: "123456789" };
    Ti.App.Properties.setObject('userInfo', userInfo);
} else {
    // Show the cookie value of user info
    alert(lastLoginUserInfo.cookie);
}
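With that approach, updating the cookie after a login is just a matter of writing the object back (sketch; newCookieValue stands for whatever your login response returns):
// After a successful login, persist the fresh cookie value
var userInfo = Ti.App.Properties.getObject('userInfo', {}) || {};
userInfo.cookie = newCookieValue;
Ti.App.Properties.setObject('userInfo', userInfo);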
I am very new to Dojo (1.7), and I am very excited by the AMD loader and the overall philosophy. But even though I have read dozens of documentation pages and googled until my brain started to grill, I am still unable to understand and perform some things. I would like to display a dijit.Tree of any sort of JSON, yes, like a JSON editor, because I also use persistent JSON files for storing a few pieces of data (not only for GET/.../ transmission). Here are my expectations:
sample JSON : {"infos":{"address":"my address","phone":"my
phone"},"insurance":{"forks":[14,53,123],"prices":[5,8,"3%"]}}
display the different variables of any JSON: the root node is the root JSON variable, the level-1 children are the root's variables, etc. Depending on the JSON variable type (String, Number, Object, Array) I will also display a corresponding icon
avoid parsing and formatting the whole JSON in one big pass; I would like, for example, to display the root node first, then the well-formatted children through a getChildren method, so it is done progressively on expand (like lazy loading). I have already made my own Tree classes in JavaScript; the most flexible approach was to give the constructor a dataRoot, a renderItem(dataItem, domItem) and a getChildren(dataItem), so I could compute and return whatever I wanted, and the Tree performed the rendering only when needed. The Tree knew nothing about the data structure and did not modify it, so I am not sure I understand why dijit.Tree needs such a restrictive way of building.
Here is my last try. It might not be the right way at all (maybe I have to subclass), but as far as I understand, I need to play with 3 classes (a dojo store, a tree model and the tree widget). Firstly, it seems the model can't get the root node; please check my code comments. So, is there any patient person who can give me a simple example with some clear explanations (yeah, I am a bit demanding), or at least the list of the necessary constructor options I need to start displaying a nice tree view of my JSON file? There's so much here that I'm totally lost. Many thanks!
...
// before there is the AMD part that loads the needed things
Xhr.get({
    url: 'data/file.json',
    handleAs: 'json',
    load: function(data) {
        console.log('xhr.loaded : ', data); // got my javascript object from the json string

        var store = new ItemFileReadStore({ // is it the right store I need ??
            // or the Memory store ?
            // assuming later I'll need to save the data changes
            rootId: 'root',
            rootLabel: 'Archive', // useless ? isn't it the model responsibility ?
            data: { id: 'root', items: [data] } // trying to give a root node well formatted
        });

        var model = new TreeStoreModel({
            store: store,
            getChildren: function(obj) {
                // firstly here it seems the root is not found
                // I got a 'error loading root' error
                // what is missing in my instantiations ??
                // what exactly is the type of the 1st arg : a store ?
                console.log('getChildren : ', this.get(obj.id));
            },
            mayHaveChildren: function() {
                console.log('mayHaveChildren ', arguments);
                return true;
            }
        });

        var tree = new Tree({
            model: model
        }, domId);
        tree.startup();
    }
});
My solution is based on dojo/store/Memory, inspired by Connecting a Store to a Tree:
You can find a live demo at http://egoworx.com/ or download the complete source from Dropbox.
Now the code. First, dojo/store/Memory:
var data = {"infos":{"address":"my address","phone":"my phone", "gift": false, "now": new Date()},"insurance":{"forks":[14,53,123],"prices":[5,8,"3%"]}};
var store = new Memory({
data: data,
mayHaveChildren: function(object) {
var type = this.getType(object);
return (type == "Object" || type == "Array");
},
getChildren: function(object, onComplete, onError) {
var item = this.getData(object);
var type = this.getType(object);
var children = [];
switch(type) {
case "Array":
children = item;
break;
case "Object":
for (i in item) {
children.push({label: i, data: item[i]});
}
break;
}
onComplete(children);
},
getRoot: function(onItem, onError) {
onItem(this.data);
},
getLabel: function(object) {
var label = object.label || object + "";
var type = this.getType(object);
switch(type) {
case "Number":
case "String":
case "Boolean":
case "Date":
var data = this.getData(object);
if (data != label) {
label += ": " + this.getData(object);
}
}
return label;
},
getData: function(object) {
if (object && (object.data || object.data === false) && object.label) {
return object.data;
}
return object;
},
getType: function(object) {
var item = this.getData(object);
if (lang.isObject(item)) {
if (lang.isArray(item)) return "Array";
if (lang.isFunction(item)) return "Function";
if (item instanceof Date) return "Date";
return "Object";
}
if (lang.isString(item)) return "String";
if (item === true || item === false) return "Boolean";
return "Number";
},
getIconClass: function(object, opened) {
return this.getType(object);
}
});
Please note I added a boolean and Date type to your data.
dijit/Tree based on this store:
var tree = new Tree({
    model: store,
    persist: false,
    showRoot: false,
    getIconClass: function(object, opened) {
        if (lang.isFunction(this.model.getIconClass)) {
            return this.model.getIconClass(object, opened);
        }
        return (!object || this.model.mayHaveChildren(object)) ? (opened ? "dijitFolderOpened" : "dijitFolderClosed") : "dijitLeaf";
    }
}, "placeholder");
tree.startup();
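If the JSON comes from a file as in the question, the same store and tree can be built inside the xhr callback. This is only a sketch reusing the question's dojo 1.7 Xhr.get; createJsonStore is a hypothetical helper that wraps the Memory store definition shown above:
// Sketch: fetch data/file.json first, then build the store and the tree from the reply
Xhr.get({
    url: 'data/file.json',
    handleAs: 'json',
    load: function(data) {
        var store = createJsonStore(data); // hypothetical helper wrapping the Memory store above
        var tree = new Tree({ model: store, persist: false, showRoot: false }, "placeholder");
        tree.startup();
    },
    error: function(err) {
        console.error('could not load JSON', err);
    }
});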
And finally a stylesheet to display data type icons:
.dijitTreeIcon {
width: 16px;
height: 16px;
}
.Object {
background-image: url(http://dojotoolkit.org/api/css/icons/16x16/object.png);
}
.Array {
background-image: url(http://dojotoolkit.org/api/css/icons/16x16/array.png);
}
.Date {
background-image: url(http://dojotoolkit.org/api/css/icons/16x16/date.png);
}
.Boolean {
background-image: url(http://dojotoolkit.org/api/css/icons/16x16/boolean.png);
}
.String {
background-image: url(http://dojotoolkit.org/api/css/icons/16x16/string.png);
}
.Number {
background-image: url(http://dojotoolkit.org/api/css/icons/16x16/number.png);
}
I cannot access jsFiddle since I'm currently in China, but I'll put the code above there upon my return to Europe and post a link here.
Try something like this instead:
store = new dojo.data.ItemFileWriteStore({
    url: "",
    data: {
        identifier: "id",
        label: "label",
        items: [{
            id: "root",
            label: "root",
            type: "root",
            children: [data]
        }]
    }
});
Also, in general, avoid overriding the tree functions; you might extend them, but be careful.
If you want to console.log, then rather connect to them...
ItemFileReadStore is a read-only store, so not the one you want for "saving modifications".
You can try the ItemFileWriteStore, or JsonRest, etc.