QuickBlox - ResourceBindingNotOfferedException

When I try to download the avatar using QBContent.downloadFileTask(), I'm getting this error. See the logcat output below:
06-22 23:23:11.149 25578-26385/com.connexe.chatapp D/QBASDK﹕ *********************************************************
*** RESPONSE *** 5f436d20-362a-4a53-a4b9-c25a0230a807 ***
STATUS : 200
HEADERS
Access-Control-Allow-Origin=*
Cache-Control=max-age=0, private, must-revalidate
Connection=keep-alive
Content-Type=application/json; charset=utf-8
Date=Mon, 22 Jun 2015 17:53:10 GMT
ETag="155319f260019f916ce5c54c84dfdb75"
QB-Token-ExpirationDate=2015-06-22 19:52:42 UTC
QuickBlox-REST-API-Version=0.1.1
Server=nginx/1.0.15
Status=200 OK
Transfer-Encoding=chunked
X-Rack-Cache=miss
X-Request-Id=ffdb8389df12e95ad2f3aff9663314d2
X-Runtime=0.020129
X-UA-Compatible=IE=Edge,chrome=1
BODY
'{"blob":{"blob_status":"complete","content_type":"image/jpeg","created_at":"2015-05-31T12:11:56Z","id":1178358,"last_read_access_ts":null,"lifetime":0,"name":"user_avator.jpg","public":true,"ref_count":1,"set_completed_at":"2015-05-31T12:12:03Z","size":38484,"uid":"228591978d194d58a461388a504b70dc00","updated_at":"2015-05-31T12:12:03Z"}}'
06-22 23:23:11.189 25578-26385/com.connexe.chatapp D/QBASDK﹕ =========================================================
=== REQUEST ==== 7658b488-e476-4eb4-9c13-a51b6230994e ===
REQUEST
GET https://api.quickblox.com/blobs/228591978d194d58a461388a504b70dc00.json
HEADERS
QuickBlox-REST-API-Version=0.1.1
QB-SDK=Android 2.2.5
QB-Token=e28eceb4de27db4c1ad4aff546febcbc9fe03eff
PARAMETERS
INLINE
GET https://api.quickblox.com/blobs/228591978d194d58a461388a504b70dc00.json
06-22 23:23:12.889 25578-25578/com.connexe.chatapp D/memalloc﹕ /dev/pmem: Unmapping buffer base:0x51a25000 size:6012928 offset:5963776
06-22 23:23:12.969 25578-25578/com.connexe.chatapp D/memalloc﹕ /dev/pmem: Mapped buffer base:0x51a1f000 size:6012928 offset:5963776 fd:66
06-22 23:23:13.889 25578-26398/com.connexe.chatapp D/SMACK﹕ RCV (0):
06-22 23:23:13.899 25578-26398/com.connexe.chatapp D/SMACK﹕ SENT (0):
06-22 23:23:16.409 25578-25578/com.connexe.chatapp D/memalloc﹕ /dev/pmem: Unmapping buffer base:0x51a1f000 size:6012928 offset:5963776
06-22 23:23:16.519 25578-25578/com.connexe.chatapp D/memalloc﹕ /dev/pmem: Mapped buffer base:0x51a1f000 size:3518464 offset:3444736 fd:68
06-22 23:23:17.369 25578-26398/com.connexe.chatapp D/SMACK﹕ RCV (0):
06-22 23:23:18.919 25578-26383/com.connexe.chatapp E/Login error1﹕ ResourceBindingNotOfferedException
06-22 23:23:18.939 25578-25578/com.connexe.chatapp E/Login error﹕ ResourceBindingNotOfferedException
06-22 23:23:18.959 25578-25578/com.connexe.chatapp D/memalloc﹕ /dev/pmem: Unmapping buffer base:0x554ab000 size:3260416 offset:3076096
06-22 23:23:18.959 25578-25578/com.connexe.chatapp D/memalloc﹕ /dev/pmem: Unmapping buffer base:0x56365000 size:3444736 offset:3260416
06-22 23:23:18.959 25578-25578/com.connexe.chatapp D/memalloc﹕ /dev/pmem: Unmapping buffer base:0x56c80000 size:3702784 offset:3518464
06-22 23:23:19.909 25578-25578/com.connexe.chatapp D/memalloc﹕ /dev/pmem: Unmapping buffer base:0x51a1f000 size:3518464 offset:3444736
06-22 23:23:19.949 25578-25578/com.connexe.chatapp D/memalloc﹕ /dev/pmem: Mapped buffer base:0x51a1f000 size:3149824 offset:3076096 fd:68
06-22 23:23:20.449 25578-26398/com.connexe.chatapp D/SMACK﹕ RCV (0): zlib
06-22 23:23:23.410 25578-25578/com.connexe.chatapp D/memalloc﹕ /dev/pmem: Unmapping buffer base:0x51a1f000 size:3149824 offset:3076096
06-22 23:23:23.459 25578-25578/com.connexe.chatapp D/memalloc﹕ /dev/pmem: Mapped buffer base:0x51a1f000 size:3190784 offset:3076096 fd:68
06-22 23:23:26.919 25578-25578/com.connexe.chatapp D/memalloc﹕ /dev/pmem: Unmapping buffer base:0x51a1f000 size:3190784 offset:3076096

Related

Vue.js Filter working locally, but not on server

I am fairly new to Vue and I know that I am probably not following best practices in how to structure code. However, this is really just quick and dirty, and I stumbled over a strange situation.
A piece of code that filters down an array (a kind of live search) works on my local MAMP server, but not on my NAS-based web server.
This is the piece of code:
<script>
const app = new Vue({
  el: '#recipe-app-start',
  data: {
    searchQuery: null,
    recipes: [],
  },
  filters: {
    recipeUrl(id) {
      return 'recipe.php?id=' + id;
    },
  },
  mounted() {
    fetch(globalConfig.restServer + 'recipes')
      .then(response => response.json())
      .then((data) => {
        this.recipes = data;
        console.log(this.recipes);
      })
  },
  computed: {
    // Source: https://stackoverflow.com/questions/52558770/vuejs-search-filter
    resultQuery() {
      if (this.searchQuery) {
        return this.recipes.filter((item) => {
          return this.searchQuery.toLowerCase().split(' ').every(v => (item.title.toLowerCase().includes(v) || item.description.toLowerCase().includes(v)))
        })
      } else {
        return this.recipes;
      }
    }
  },
})
</script>
The only difference is the URL of the REST server (which is also changed to point to the same server).
The array initially populates, but when I type into the search field, it works locally and filters out irrelevant entries, while on the server it just raises an error in the console:
vue:634 [Vue warn]: Error in render: "TypeError: Cannot read property 'toLowerCase' of null"
(found in <Root>)
warn # vue:634
logError # vue:1893
globalHandleError # vue:1888
handleError # vue:1848
Vue._render # vue:3553
updateComponent # vue:4067
get # vue:4478
run # vue:4553
flushSchedulerQueue # vue:4311
(anonymous) # vue:1989
flushCallbacks # vue:1915
Promise.then (async)
timerFunc # vue:1942
nextTick # vue:1999
queueWatcher # vue:4403
update # vue:4543
notify # vue:745
reactiveSetter # vue:1070
proxySetter # vue:4630
input # VM237:3
invokeWithErrorHandling # vue:1863
invoker # vue:2188
original._wrapper # vue:7547
vue:1897 TypeError: Cannot read property 'toLowerCase' of null
at (index):114
at Array.every (<anonymous>)
at (index):114
at Array.filter (<anonymous>)
at Vue.resultQuery ((index):113)
at Watcher.get (vue:4478)
at Watcher.evaluate (vue:4583)
at Proxy.computedGetter (vue:4832)
at Proxy.eval (eval at createFunction (vue:11649), <anonymous>:3:561)
at Vue._render (vue:3551)
Any ideas as to the reason?
Do you fetch the same data locally and from the NAS-based webserver?
It seems there is a problem when calling the toLowerCase function. This is probably because an item.title or item.description is null.
I am so stupid. I found the error: apparently I added one additional entry in the server version, and that one indeed contained a row with a null "value" in description - that was causing the issue... Now I know that I need to cater for this as well...
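For reference, a minimal null-safe version of the computed filter (a sketch; the || '' fallbacks are the only change, treating a missing title or description as an empty string):
resultQuery() {
  if (!this.searchQuery) return this.recipes;
  const terms = this.searchQuery.toLowerCase().split(' ');
  return this.recipes.filter(item =>
    terms.every(v =>
      // Fall back to '' so a null title or description cannot crash toLowerCase().
      (item.title || '').toLowerCase().includes(v) ||
      (item.description || '').toLowerCase().includes(v)
    )
  );
}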

Apache Kafka with SSL is working but SSL errors against localhost in kafka log (drives me nuts)

I have a Kafka problem that drives me nuts.
We have a 4-node cluster. We used to run without SSL in our development stage. --> no problem.
For release, we switched on SSL for both listeners. --> Everything is working fine (application + Kafka Manager CMAK + monitoring).
But we get an error in our Kafka broker server log in all environments (test, release, prod). Something is polling, and I don't know what it is or where to look:
It starts with:
[2020-10-16 10:50:27,866] INFO AdminClientConfig values:
bootstrap.servers = [127.0.0.1:10092]
client.dns.lookup = default
client.id =
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
(org.apache.kafka.clients.admin.AdminClientConfig)
Then massive SSL error polling:
[2020-10-16 10:48:11,799] INFO [SocketServer brokerId=2] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2020-10-16 10:48:13,141] INFO [SocketServer brokerId=2] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2020-10-16 10:48:14,476] INFO [SocketServer brokerId=2] Failed authentication with /127.0.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
Then Timeout:
[2020-10-16 10:48:20,890] INFO [AdminClient clientId=adminclient-25] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager)
org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
[2020-10-16 10:48:20,892] INFO [AdminClient clientId=adminclient-25] Metadata update failed (org.apache.kafka.clients.admin.internals.AdminMetadataManager)
org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited.
After 1-2 minutes, it starts again.
Our Broker config:
# Maintained by Ansible
zookeeper.connect=ZOOKEEPER1:2181,ZOOKEEPER2:2181,ZOOKEEPER3:2181
log.dirs=KAFKKALOGDIR
broker.id=2
confluent.license.topic.replication.factor=3
log.segment.bytes=1073741824
socket.receive.buffer.bytes=102400
socket.send.buffer.bytes=102400
offsets.topic.replication.factor=3
num.network.threads=8
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
confluent.support.metrics.enable=False
zookeeper.connection.timeout.ms=18000
num.io.threads=16
socket.request.max.bytes=104857600
log.retention.check.interval.ms=300000
group.initial.rebalance.delay.ms=0
confluent.metadata.topic.replication.factor=3
num.recovery.threads.per.data.dir=2
default.replication.factor=3
num.partitions=10
log.retention.hours=168
confluent.support.customer.id=anonymous
listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL
listeners=INTERNAL://:10091,EXTERNAL://:10092
advertised.listeners=INTERNAL://BROKERURL:10091,EXTERNAL://BROKERURL:10092
## Inter Broker Listener Configuration
inter.broker.listener.name=INTERNAL
listener.name.internal.ssl.truststore.location=LOCATION
listener.name.internal.ssl.truststore.password=PASSWORD
listener.name.internal.ssl.keystore.location=LOCATION
listener.name.internal.ssl.keystore.password=PASSWORD
listener.name.internal.ssl.key.password=PASSWORD
listener.name.external.ssl.truststore.location=LOCATION
listener.name.external.ssl.truststore.password=PASSWORD
listener.name.external.ssl.keystore.location=LOCATION
listener.name.external.ssl.keystore.password=PASSWORD
listener.name.external.ssl.key.password=PASSWORD
## Metrics Reporter Configuration
confluent.metrics.reporter.security.protocol=SSL
confluent.metrics.reporter.ssl.truststore.location=LOCATION
confluent.metrics.reporter.ssl.truststore.password=PASSWORD
What I did:
- Disabled our monitoring agent (I thought the agent was polling without SSL) --> nothing.
- Added an additional localhost PLAINTEXT listener on 127.0.0.1 --> got massive problems with the error "no matching leader for topic XY".
So I don't know how to continue - maybe someone has an idea.
Many thanks.
Your AdminClientConfig specifies security.protocol=PLAINTEXT - that doesn't seem right given you want to enable SSL. https://kafka.apache.org/11/javadoc/org/apache/kafka/common/security/auth/SecurityProtocol.html shows the possible options for that variable.
You also have sasl.jaas.config=null, which I don't believe is correct either.
https://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption/ provides a good walk-through of how to setup security with Kafka.
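For reference, a minimal SSL client configuration might look like the following (a sketch; the paths and passwords are placeholders, and the keystore lines are only needed if the brokers require client authentication):
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=CHANGEME
# Only required when the broker sets ssl.client.auth=required:
ssl.keystore.location=/path/to/client.keystore.jks
ssl.keystore.password=CHANGEME
ssl.key.password=CHANGEME
Whichever component owns that embedded AdminClient (it is polling 127.0.0.1:10092, your EXTERNAL SSL port, with PLAINTEXT) would need these properties in its client config.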
Edited to add (in response to a follow-up question): AdminClient is a Java class which is instantiated from your connect-distributed.properties file. When you search your service logfile for AdminClient, you will see something like this:
[2020-09-16 02:58:14,180] INFO AdminClientConfig values:
bootstrap.servers = [servernames-elided:9092]
client.dns.lookup = default
client.id =
connections.max.idle.ms = 300000
default.api.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 20000
retries = 2147483647
retry.backoff.ms = 500
sasl.client.callback.handler.class = null
sasl.jaas.config = [hidden]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = PLAIN
security.protocol = SASL_SSL
security.providers = null
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
(org.apache.kafka.clients.admin.AdminClientConfig:347)
Note that sasl.jaas.config = [hidden] - that's because the username and password for accessing the cluster are stored in the properties file directly, like so:
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\"someUsernameThisIs\" password=\"notMyRealPassword\";
Note that the escaping of the double quotes is necessary for the configuration parser.

How to close a RabbitMQ connection

I am using Pika 1.1.0 and Python 3.7.4. There is a consumer running in a thread. I want to kill the thread and close the RabbitMQ connection, but it fails. Where am I going wrong? How can I do it?
Error: pika.exceptions.StreamLostError: Stream connection lost: IndexError('pop from an empty deque')
def BrokerConnection(username, password):
    try:
        credentials = pika.PlainCredentials(username, password)
        parameters = pika.ConnectionParameters("127.0.0.1", "5672", "vhost", credentials, heartbeat=0)
        connection = pika.BlockingConnection(parameters)
        channel = connection.channel()
        return connection, channel
    except:
        return None, None

def BrokerListen():
    def BrokerConnect():
        while True:
            try:
                queue = "test-que"
                connection, channel = BrokerConnection("admin", "123345")
                if channel is not None:
                    brokerConnections.update({"key1": channel})
                    return connection, channel, queue  # return queue as well, so the unpacking below works
            except:
                time.sleep(1)
                print("error")
    connection, channel, queue = BrokerConnect()
    print(f'[*] Waiting for {queue} messages. To exit press CTRL+C')
    def callback(ch, method, properties, body):
        data = json.loads(body)
        print(data)
        ch.basic_ack(delivery_tag=method.delivery_tag)
    channel.basic_consume(queue=queue, on_message_callback=callback)
    channel.start_consuming()

def ConnectionClose():
    channel = brokerConnections["key1"]
    if channel.is_open:
        connection = channel.connection
        channel.stop_consuming()
        connection.close()
        del brokerConnections["key1"]
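The traceback suggests a likely cause (an assumption, since the threading code isn't shown): pika's BlockingConnection is not thread-safe, so calling stop_consuming() or close() from a thread other than the one running start_consuming() can corrupt the stream. The documented cross-thread entry point is connection.add_callback_threadsafe(); a minimal sketch:
def ConnectionCloseThreadsafe():
    channel = brokerConnections["key1"]
    connection = channel.connection
    # Schedule stop_consuming on the connection's own I/O loop; this makes
    # start_consuming() return inside the consumer thread.
    connection.add_callback_threadsafe(channel.stop_consuming)
The consumer thread can then close the connection itself after start_consuming() returns (e.g. at the end of BrokerListen), instead of closing it from outside.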

ERROR: engine.cpp (370) - Cuda Error in ~ExecutionContext: 77

I do INT8 calibration using TensorRT.
Once calibration has completed, I test the inference. I get an error at stream.synchronize() in the following function.
There is no issue running the FP32 and FP16 engines; the error only occurs with the INT8 engine. What could be wrong?
def infer(engine, x, batch_size, context):
    inputs = []
    outputs = []
    bindings = []
    stream = cuda.Stream()
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding)) * batch_size
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        # Allocate host and device buffers
        host_mem = cuda.pagelocked_empty(size, dtype)
        device_mem = cuda.mem_alloc(host_mem.nbytes)
        # Append the device buffer to device bindings.
        bindings.append(int(device_mem))
        # Append to the appropriate list.
        if engine.binding_is_input(binding):
            inputs.append(HostDeviceMem(host_mem, device_mem))
        else:
            outputs.append(HostDeviceMem(host_mem, device_mem))
    #img = np.array(x).ravel()
    im = np.array(x, dtype=np.float32, order='C')
    im = im[:,:,::-1]
    #im = im.transpose((2,0,1))
    #np.copyto(inputs[0].host, x.flatten()) #1.0 - img / 255.0
    np.copyto(inputs[0].host, im.flatten())
    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    context.execute_async(batch_size=batch_size, bindings=bindings, stream_handle=stream.handle)
    # Transfer predictions back from the GPU.
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    # Synchronize the stream
    stream.synchronize()
    # Return only the host outputs.
    return [out.host for out in outputs]
The following code runs without error. The only difference is that it uses engine.max_batch_size instead of batch_size.
def allocate_buffers(engine):
    inputs = []
    outputs = []
    bindings = []
    stream = cuda.Stream()
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        # Allocate host and device buffers
        host_mem = cuda.pagelocked_empty(size, dtype)
        device_mem = cuda.mem_alloc(host_mem.nbytes)
        # Append the device buffer to device bindings.
        bindings.append(int(device_mem))
        # Append to the appropriate list.
        if engine.binding_is_input(binding):
            inputs.append(HostDeviceMem(host_mem, device_mem))
        else:
            outputs.append(HostDeviceMem(host_mem, device_mem))
    return inputs, outputs, bindings, stream

# This function is generalized for multiple inputs/outputs.
# inputs and outputs are expected to be lists of HostDeviceMem objects.
def do_inference(context, bindings, inputs, outputs, stream, batch_size=1):
    # Transfer input data to the GPU.
    [cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]
    # Run inference.
    context.execute_async(batch_size=batch_size, bindings=bindings, stream_handle=stream.handle)
    # Transfer predictions back from the GPU.
    [cuda.memcpy_dtoh_async(out.host, out.device, stream) for out in outputs]
    # Synchronize the stream
    stream.synchronize()
    # Return only the host outputs.
    return [out.host for out in outputs]
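Pairing the two working helpers looks like this (a sketch; HostDeviceMem, engine, context and the preprocessed image im are assumed to be set up as in the question). Sizing the buffers with engine.max_batch_size guarantees they are large enough for any runtime batch size, whereas sizing them with a smaller batch_size, as in the failing infer(), can let a kernel read or write past the end of a buffer - a plausible source of CUDA error 77 (illegal address):
# Allocate once, sized for engine.max_batch_size, then reuse per inference.
inputs, outputs, bindings, stream = allocate_buffers(engine)
np.copyto(inputs[0].host, im.flatten())
results = do_inference(context, bindings, inputs, outputs, stream, batch_size=1)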

Vuex error when sorting a local array

I am using Vue.js and Vuex for the UI of my app; the app has many files full of util and logic-based functions.
In one of these files, I have a function which returns a promise and does some array manipulation, as per the below:
function firePixels (clientId, result) {
  return new Promise((resolve, reject) => {
    const purposesZ = [1, 2, 3, 4, 5];
    let purposeArrayZ = cmp.getPurposesAllowed();
    purposeArrayZ.sort((a, b) => a - b);
    if (isEqual(purposesZ, purposeArrayZ)) {
      console.log("equal!");
      resolve(console.log("equal!"));
    }
  });
}
I have added the Z to test for unique names, but they should be local variables anyway.
My error from Vuex:
vue.runtime.esm.js:588 [Vue warn]: Error in callback for watcher "function () { return this._data.$$state }": "Error: [vuex] Do not mutate vuex store state outside mutation handlers."
(found in <Root>)
warn # vue.runtime.esm.js:588
logError # vue.runtime.esm.js:1732
globalHandleError # vue.runtime.esm.js:1727
handleError # vue.runtime.esm.js:1716
run # vue.runtime.esm.js:3230
update # vue.runtime.esm.js:3202
notify # vue.runtime.esm.js:694
mutator # vue.runtime.esm.js:841
(anonymous) # main.js:91
fireGtmPixels # main.js:83
(anonymous) # main.js:69
Promise.then (async)
(anonymous) # main.js:68
Promise.then (async)
(anonymous) # main.js:64
./client/main.js # cmp:263
__webpack_require__ # cmp:62
(anonymous) # cmp:179
(anonymous) # cmp:182
vue.runtime.esm.js:1736 Error: [vuex] Do not mutate vuex store state outside mutation handlers.
at assert (vuex.esm.js:105)
at Vue.store._vm.$watch.deep (vuex.esm.js:754)
at Watcher.run (vue.runtime.esm.js:3228)
at Watcher.update (vue.runtime.esm.js:3202)
at Dep.notify (vue.runtime.esm.js:694)
at Array.mutator (vue.runtime.esm.js:841)
at eval (main.js:91)
at new Promise (<anonymous>)
at fireGtmPixels (main.js:83)
at eval (main.js:69)
Array.prototype.sort mutates the array that you call it on.
Assuming that cmp.getPurposesAllowed() returns a reference to an array in your Vuex store, you are mutating that array from the store, just like the error message says.
Solution: make a shallow copy of the array before sorting:
let purposeArrayZ = cmp.getPurposesAllowed().slice();
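Applied to the original function, that is a one-line change (a sketch; spreading into a new array with [...array] works equally well):
function firePixels (clientId, result) {
  return new Promise((resolve, reject) => {
    const purposesZ = [1, 2, 3, 4, 5];
    // Copy first, so sort() mutates the local copy instead of the store array.
    const purposeArrayZ = cmp.getPurposesAllowed().slice();
    purposeArrayZ.sort((a, b) => a - b);
    if (isEqual(purposesZ, purposeArrayZ)) {
      resolve("equal!");
    }
  });
}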