Appcelerator - config.adapter.idAttribute "id" not found in list of columns for table on iOS 11 - Titanium

I'm at my wits' end. I have an Appcelerator application for iOS and Android that uses Alloy models, and I can't get it to work on iOS 11.
When building and launching the app in the iPhone 7 simulator on iOS 11, I get the following error when the app starts up.
[DEBUG] : Installing sql database "db.sql" with name "db"
[INFO] : [LiveView] Error Evaluating /alloy/models/Receive.js # Line: <null>
[ERROR] : config.adapter.idAttribute "id" not found in list of columns for table "receive"
[ERROR] : columns: []
[ERROR] : File: /alloy/models/Receive.js
[ERROR] : Line: <null>
[ERROR] : SourceId: <null>
[ERROR] : Backtrace:
[ERROR] : undefined
[INFO] : [LiveView] Error Evaluating app.js # Line: 240
[ERROR] : TypeError: undefined is not a constructor (evaluating 'new (require("/alloy/models/" + ucfirst(name)).Collection)(args)')
[ERROR] : File: app.js
[ERROR] : Line: 240
[ERROR] : SourceId: <null>
[ERROR] : Backtrace:
[ERROR] : undefined
[DEBUG] : Application booted in 835.013032 ms
[TRACE] : Uploaded tiapp metadata with Appcelerator Platform!
This only happens in iOS 11. Building for iOS 10.3 and Android works fine, and the app works perfectly.
My models/receive.js looks like this.
exports.definition = {
    config: {
        "columns": {
            "id" : "INTEGER PRIMARY KEY AUTOINCREMENT",
            "lev_naam" : "TEXT",
            "product" : "TEXT",
            "state" : "INTEGER",
            "type" : "TEXT",
            "subtype" : "TEXT",
            "temp" : "TEXT",
            "packing" : "INTEGER",
            "freshness" : "INTEGER",
            "tht" : "INTEGER",
            "description" : "TEXT",
            "check_date" : "INTEGER",
            "check_week" : "INTEGER",
            "check_year" : "INTEGER",
            "creation_date" : "INTEGER",
            "update_date" : "INTEGER",
            "username" : "TEXT",
            "location" : "TEXT",
            "db_id" : "INTEGER",
            "issend" : "INTEGER",
            "isdeleted" : "INTEGER",
            "uid" : "TEXT",
        },
        "defaults": {
            "lev_naam" : "",
            "product" : "",
            "state" : -1,
            "type" : "",
            "subtype" : "",
            "temp" : null,
            "packing" : null,
            "freshness" : null,
            "tht" : null,
            "description" : "",
            "check_date" : null,
            "check_week" : null,
            "check_year" : null,
            "creation_date" : 0,
            "update_date" : 0,
            "username" : null,
            "isdeleted" : null,
            "issend" : null,
            "db_id" : null,
            "uid" : null,
        },
        "adapter": {
            "db_file" : 'db.sql',
            "type": "sql",
            "collection_name": "receive",
            "idAttribute": "id",
        }
    },
(.... rest of code)
When I remove the idAttribute key in the adapter object for all my models, it builds and runs on iOS 11, but then my fetch queries fail, because I'm fetching items by their id.
I'm using the latest version of Appcelerator and I've updated the Titanium SDK to the latest (6.2.2.GA).
The error is telling me that somehow the application sees my columns object as empty... but it obviously isn't. For reference, the sketch below shows roughly the kind of fetch that breaks once I drop idAttribute.
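A minimal sketch of such a fetch (the id value and the query itself are illustrative, not my exact code):

// Alloy's sql adapter resolves model ids via config.adapter.idAttribute,
// so any fetch that looks records up by id depends on it being set.
var someId = 1; // illustrative
var receives = Alloy.createCollection('receive');
receives.fetch({
    query: {
        statement: 'SELECT * FROM receive WHERE id = ?',
        params: [someId]
    }
});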
Hope you guys have the answer!

Related

How to use the four_button.js plugin in DataTables 1.10

I'm starting to use DataTables version 1.10, but I have problems using the option "sPaginationType" : 'four_button' when initializing my DataTables.
In version 1.9.4, which I used previously, it worked correctly.
My DataTables version:
https://cdn.datatables.net/1.10.19/js/jquery.dataTables.js
My four_button version:
https://cdn.datatables.net/plug-ins/1.10.17/pagination/four_button.js
table = $('#tabla-informes').DataTable({
    "sDom" : "",
    "sPaginationType" : 'four_button',
    "aoColumns" : aoColumns,
    "aoColumnDefs" : [ {"bVisible" : false, "aTargets" : [ 0 ]} ],
    "bServerSide" : true,
    "sAjaxSource" : "/informacion/ajaxLista",
    "fnServerParams": function ( aoData ) {
        // ...
    }
});
When I initialize it this way, the browser crashes:
https://imgur.com/YnNlXoj
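For reference, the script load order I would expect to need (a sketch built from the CDN files above; the jQuery version is illustrative):

<!-- jQuery first (version illustrative), then DataTables, then the plugin -->
<script src="https://code.jquery.com/jquery-1.12.4.js"></script>
<script src="https://cdn.datatables.net/1.10.19/js/jquery.dataTables.js"></script>
<!-- four_button registers itself on $.fn.dataTableExt.oPagination, so it
     must load after jquery.dataTables.js and before DataTable() is called -->
<script src="https://cdn.datatables.net/plug-ins/1.10.17/pagination/four_button.js"></script>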
Thanks, all

ELK - X-Pack Custom realm

I've developed a custom realm for my ELK cluster.
This module works well on a one-node Elasticsearch, but when I install it on my production cluster, nothing works.
Elasticsearch startup logs:
- nothing special; everything seems to work and the X-Pack module is loaded (it generates a log line on stdout)
Elasticsearch cluster diagnostics (the custom realm seems to be disabled and not available):
{
  "security" : {
    "available" : true,
    "enabled" : true,
    "realms" : {
      "file" : {
        "available" : true,
        "enabled" : false
      },
      "ldap" : {
        "available" : true,
        "enabled" : false
      },
      "native" : {
        "name" : [ "realm2" ],
        "available" : true,
        "size" : [ 2 ],
        "enabled" : true,
        "order" : [ 1 ]
      },
      "custom" : {
        "available" : false,
        "enabled" : false
      },
      ...
}
Elasticsearch configuration:
cluster.name: "production-cluster-1"
network.host: 0.0.0.0
bootstrap.memory_lock: true
xpack.security.enabled: true
xpack.security.audit.enabled: true
xpack.monitoring.enabled: true
xpack.graph.enabled: false
xpack.watcher.enabled: false
xpack.ml.enabled: false
discovery.zen.ping.unicast.hosts: "-------------------"
network.publish_host: "----"
discovery.zen.minimum_master_nodes: 3
xpack.security:
  authc:
    realms:
      realm1:
        type: custom
        order: 0
      realm2:
        type: native
        order: 1
The native authentication works fine.
How can I troubleshoot this correctly? :)
Thanks
Let's start by turning up the logging:
curl -u<user> -XPUT '<host>:<http-port>/_cluster/settings?pretty' -H 'Content-Type: application/json' -d'
{
  "transient": {
    "logger.org.elasticsearch.xpack.security.authc": "DEBUG"
  }
}
'
That will give you all authentication logging at debug level. I think the right package for the native realm should be logger.org.elasticsearch.xpack.security.authc.esnative, in case you want to limit it down to just that.
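For example, scoped down to the native realm only (same call as above, just with the narrower logger key):

curl -u<user> -XPUT '<host>:<http-port>/_cluster/settings?pretty' -H 'Content-Type: application/json' -d'
{
  "transient": {
    "logger.org.elasticsearch.xpack.security.authc.esnative": "DEBUG"
  }
}
'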

Get Camunda TaskID after creation in response

We are using Camunda for our approval process implementation in our application.
We created a BPMN process with a human task. We are using the below URL:
engine-rest/engine/default/process-definition/key/processKey/start
We pass our form parameters as input to this service:
{
  "variables": {
    "requestId" : {"value" : "xxxxx", "type" : "String"},
    "catalog" : {"value" : "yyyy", "type" : "String"},
    "businessReason": {"value":"yyyyy","type":"String"},
    "link": {"value":"","type":"String"}
  }
}
The response of this start call is below:
{
  "links": [
    {
      "method": "GET",
      "href": "http://localhost:8080/engine-rest/engine/default/process-instance/31701",
      "rel": "self"
    }
  ],
  "id": "31701",
  "definitionId": "xxxxx:7:31605",
  "businessKey": null,
  "caseInstanceId": null,
  "ended": false,
  "suspended": false,
  "tenantId": null
}
The id in the response is not the actual task ID (which we use to get the task details etc.); instead it's the execution ID.
Is there a way to get the task ID back in the response? Also, can we add some parameters to the above response, like:
"status" : "success"
I have a listener class created for the human task, but I'm not sure how to add response parameters. Any help is appreciated.
This is not possible unless you build a custom REST resource on top of Camunda's Java API. See https://docs.camunda.org/manual/7.6/reference/rest/overview/embeddability/ for information on how you would embed the default REST resources into a custom JAX-RS application.
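A rough sketch of what such a custom resource could look like (the class name, path, and response shape are hypothetical; only the Camunda Java API calls are the real ones):

import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngines;
import org.camunda.bpm.engine.runtime.ProcessInstance;
import org.camunda.bpm.engine.task.Task;

@Path("/custom/process-definition")
public class CustomStartResource {

    @POST
    @Path("/key/{key}/start")
    @Produces(MediaType.APPLICATION_JSON)
    public String start(@PathParam("key") String key) {
        ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();

        // Start the process instance by its definition key.
        ProcessInstance instance = engine.getRuntimeService()
                .startProcessInstanceByKey(key);

        // Query the user task that is now waiting in this instance.
        Task task = engine.getTaskService().createTaskQuery()
                .processInstanceId(instance.getId())
                .singleResult();

        // Hand-rolled JSON for brevity; use a proper DTO in real code.
        return "{\"processInstanceId\":\"" + instance.getId() + "\","
             + "\"taskId\":\"" + (task != null ? task.getId() : "") + "\","
             + "\"status\":\"success\"}";
    }
}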

Defining a queue with a config file in RabbitMQ

Is there a way to define a queue in a configuration file, like in ActiveMQ:
http://activemq.apache.org/configure-startup-destinations.html
Yes, it is possible.
The easiest way:
Add the queue manually from the web UI.
By default the web UI is exposed on port 15672.
Add a queue at http://localhost:15672/#/queues.
Export the config file from the web UI.
Access the main page, http://localhost:15672/#/. At the bottom there is an "Import / export definitions" section with a "Download broker definitions" button.
Just download the file and it will contain all defined queues.
Sample config file, with users, a virtual host and a queue:
I have formatted the file using the JSFormat option of the JSTool plugin for Notepad++.
By default the file is a single line and not very readable.
Next to "Download broker definitions" there is an "Upload broker definitions" button. You may upload your file (it will work with a pretty-formatted file).
{
  "rabbit_version" : "3.5.7",
  "users" : [
    {
      "name" : "guest",
      "password_hash" : "42234423423",
      "tags" : "administrator"
    }
  ],
  "vhosts" : [
    { "name" : "/uat" }
  ],
  "permissions" : [
    {
      "user" : "guest",
      "vhost" : "/uat",
      "configure" : ".*",
      "write" : ".*",
      "read" : ".*"
    }
  ],
  "parameters" : [],
  "policies" : [],
  "queues" : [
    {
      "name" : "sms",
      "vhost" : "/uat",
      "durable" : false,
      "auto_delete" : false,
      "arguments" : {}
    }
  ],
  "exchanges" : [],
  "bindings" : []
}
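If you want the broker to load the definitions itself at startup instead of uploading them through the web UI, the management plugin can read such a file from the Erlang config (the path below is illustrative):

%% rabbitmq.config - have the management plugin import definitions at boot
[
  {rabbitmq_management, [
    {load_definitions, "/etc/rabbitmq/definitions.json"}
  ]}
].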

Pig AvroStorage + Unsupported type in record:class org.apache.pig.data.DataByteArray

I am trying to read CSV files as input data and write the output in Avro format.
Note: Pig version is Apache Pig 0.12.1.2.1.5.0-695.
REGISTER /usr/lib/pig/lib/avro-1.7.4.jar;
REGISTER /usr/lib/pig/lib/piggybank.jar;
REGISTER /usr/lib/pig/lib/jackson-mapper-asl-1.8.8.jar;
REGISTER /usr/lib/pig/lib/jackson-core-asl-1.8.8.jar;
REGISTER /usr/lib/pig/lib/json-simple-1.1.1.jar;
A = LOAD '/data/raw/event';
store A into '/data/dev/raw/pig'
using org.apache.pig.piggybank.storage.avro.AvroStorage('no_schema_check',
'schema', ' {
"name" : "EVENT",
"type" : "record",
"fields" : [ {
"name" : "evt",
"type" : [ "long", "null" ]
}, {
"name" : "mac",
"type" : [ "int", "null" ]
}, {
"name" : "sec",
"type" : [ "int", "null" ]
} ]
}');
I get the below exception:
ERROR 2997: Unable to recreate exception from backed error: Error: org.apache.avro.file.DataFileWriter$AppendWriteException: java.lang.RuntimeException:
Unsupported type in record:class org.apache.pig.data.DataByteArray
at org.apache.avro.file.DataFileWriter.append(DataFileWriter.java:263)
at org.apache.pig.piggybank.storage.avro.PigAvroRecordWriter.write(PigAvroRecordWriter.java:49)
at org.apache.pig.piggybank.storage.avro.AvroStorage.putNext(AvroStorage.java:749)
Caused by: java.lang.RuntimeException: Unsupported type in record:class org.apache.pig.data.DataByteArray
at org.apache.pig.piggybank.storage.avro.PigAvroDatumWriter.getField(PigAvroDatumWriter.java:385)
at org.apache.pig.piggybank.storage.avro.PigAvroDatumWriter.writeRecord(PigAvroDatumWriter.java:363)
Please let me know if I have missed anything or if any workaround exists.
By default Pig will load all the fields as DataByteArray.
So you have to load the data with a schema, as follows:
A = LOAD '/data/raw/event' AS (evt:long, mac:int, sec:int);
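Since your input is CSV, you will most likely also need to tell Pig the field delimiter (a sketch; PigStorage defaults to tab-separated fields):

-- PigStorage(',') splits each line on commas; the AS clause gives each field
-- a concrete type so AvroStorage no longer sees DataByteArray values.
A = LOAD '/data/raw/event' USING PigStorage(',') AS (evt:long, mac:int, sec:int);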