There appears to be a shutter speed bug with the Sony A7S and the Smart Remote Control app (version 4.10).
"0.6" seconds is listed by -getSupportedShutterSpeed, but when that speed is POST'ed with -setShutterSpeed, an HTTP error code of 500 is returned (without the speed being set).
Other shutter speeds appear to work.
I agree with your statement. Sony should fix this malfunction.
I experience the same phenomenon on the A7R II, and I'm really annoyed by this "bug". My current environment is firmware v3.0 and Remote Control App v4.10. The problem has persisted since firmware v2.0 and Remote Control App v4.0, as far as I have tried.
I wrote a one-line command on Linux to demonstrate the malfunction. The results are below.
$ one_liners/setShutterSpeed.py 0.8
{"method": "setShutterSpeed", "params": ["0.8"], "id": 1, "version": "1.0"}
{"result":[0],"id":1}

$ one_liners/setShutterSpeed.py 0.6
{"method": "setShutterSpeed", "params": ["0.6"], "id": 1, "version": "1.0"}
API camera failed. {"method": "setShutterSpeed", "params": ["0.6"], "id": 1, "version": "1.0"}

$ one_liners/setShutterSpeed.py 0.5
{"method": "setShutterSpeed", "params": ["0.5"], "id": 1, "version": "1.0"}
{"result":[0],"id":1}
I just figured out what is happening.
For a shutter speed of 0.6 seconds, one must send the string with an explicit seconds mark: "0.6\"".
For other speeds such as 0.5 and 0.8 seconds, either the plain form ("0.8") or the form with the seconds mark ("0.8\"") works.
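In code, the difference looks like this (a minimal sketch in Python; the endpoint address is an assumption, normally taken from the camera's device description):

# Sketch only: the camera address below is a placeholder for whatever
# your camera advertises in its device description.
import requests

ENDPOINT = "http://192.168.122.1:8080/sony/camera"

def set_shutter_speed(speed):
    payload = {"method": "setShutterSpeed", "params": [speed],
               "id": 1, "version": "1.0"}
    return requests.post(ENDPOINT, json=payload).json()

print(set_shutter_speed('0.8'))   # plain form is accepted: {'result': [0], 'id': 1}
print(set_shutter_speed('0.6"'))  # 0.6 s works only with the trailing seconds mark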
I observed that it took about 6 hours from the time of setting up Diagnostics (the newer offering, still in preview) for the Queue Message Count metric to move from 0 to the actual total number of messages in the queue. The other capacity metrics, Queue Capacity and Queue Count, took about 1 hour to reflect actual values.
Can anyone shed light on how these metrics are updated? It would be good to know how to predict the accuracy of the graphs.
I am concerned because, if the latency of these metrics is typically this large, an alert based on queue metrics could take too long to be raised.
Update:
Platform metrics are created by Azure resources and give you visibility into their health and performance. Each type of resource creates a distinct set of metrics without any configuration required. Platform metrics are collected from Azure resources at one-minute frequency unless specified otherwise in the metric's definition.
And 'Queue Message Count' is a platform metric.
So it should update the data every minute.
But it didn't. And this is not a problem that occurs only in the portal: even if you use the REST API to get the QueueMessageCount, it is still not updated after 1 minute:
https://management.azure.com/subscriptions/xxx-xxx-xxx-xxx-xxx/resourceGroups/0730BowmanWindow/providers/Microsoft.Storage/storageAccounts/0730bowmanwindow/queueServices/default/providers/microsoft.insights/metrics?interval=PT1H&metricnames=QueueMessageCount&aggregation=Average&top=100&orderby=Average&api-version=2018-01-01&metricnamespace=Microsoft.Storage/storageAccounts/queueServices
{
  "cost": 59,
  "timespan": "2021-05-17T08:57:56Z/2021-05-17T09:57:56Z",
  "interval": "PT1H",
  "value": [
    {
      "id": "/subscriptions/xxx-xxx-xxx-xxx-xxx/resourceGroups/0730BowmanWindow/providers/Microsoft.Storage/storageAccounts/0730bowmanwindow/queueServices/default/providers/Microsoft.Insights/metrics/QueueMessageCount",
      "type": "Microsoft.Insights/metrics",
      "name": {
        "value": "QueueMessageCount",
        "localizedValue": "Queue Message Count"
      },
      "displayDescription": "The number of unexpired queue messages in the storage account.",
      "unit": "Count",
      "timeseries": [
        {
          "metadatavalues": [],
          "data": [
            {
              "timeStamp": "2021-05-17T08:57:00Z",
              "average": 1.0
            }
          ]
        }
      ],
      "errorCode": "Success"
    }
  ],
  "namespace": "Microsoft.Storage/storageAccounts/queueServices",
  "resourceregion": "centralus"
}
This may be an issue that needs to be reported to the Azure team. It is so slow that it even loses its practicality; I think sending an alert based on this metric is a bad idea (it's too slow).
Maybe you can design your own logic in code to check the QueueMessageCount.
Just a sample (C#):
1. Get the queues, and collect all of the queue names.
2. Get the properties of each queue, and read the number of messages in it.
3. Sum the obtained numbers.
4. Send a custom alert.
A sketch of these steps is shown below.
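A minimal sketch of these steps, shown here with the Python SDK (azure-storage-queue); the C# client (Azure.Storage.Queues) exposes equivalent operations. The connection string, threshold, and alert channel below are placeholders:

# Sketch only: CONN_STR, THRESHOLD and the alert action are assumptions.
from azure.storage.queue import QueueServiceClient

CONN_STR = "<your storage connection string>"
THRESHOLD = 100  # hypothetical alert threshold

service = QueueServiceClient.from_connection_string(CONN_STR)

total = 0
for queue in service.list_queues():               # 1. get the queues
    client = service.get_queue_client(queue.name)
    props = client.get_queue_properties()         # 2. get each queue's properties
    total += props.approximate_message_count      # 3. sum the message counts

if total > THRESHOLD:                             # 4. send a custom alert
    print(f"ALERT: {total} messages across all queues")  # replace with your alert channel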
Original Answer:
At first, after I sent a message to one queue in Queue Storage, the 'Queue Message Count' also remained stubbornly at zero on my side, but a few hours later it did pick up the 'Queue Message Count':
I thought it was a bug, but it seems to work well now.
I have a TABLE created from a KSQL query and an input Stream that is backed by a Kafka topic.
This topic is sunk to S3 using Kafka Connect.
The topic receives around 1k msgs/sec.
The topic has 6 partitions and 3 replicas.
The output rate looks strange to me; the sink seems to behave oddly.
Here is my monitoring (charts omitted): the first chart shows the input rate in B/s, the second the output rate, and the third the lag computed using Burrow.
Here is my s3-sink properties file :
{
  "name": "sink-feature-static",
  "config": {
    "topics": "FEATURE_APP_STATIC",
    "topics.dir": "users-features-stream",
    "tasks.max": "6",
    "consumer.override.auto.offset.reset": "latest",
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.parquet.ParquetFormat",
    "parquet.codec": "snappy",
    "schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
    "partitioner.class": "io.confluent.connect.storage.partitioner.TimeBasedPartitioner",
    "path.format": "'part_date'=YYYY-MM-dd/'part_hour'=HH",
    "partition.duration.ms": "3600000",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "io.confluent.connect.avro.AvroConverter",
    "value.converter.schema.registry.url": "http://cp-schema-registry.schema-registry.svc.cluster.local:8081",
    "flush.size": 1000000,
    "s3.part.size": 5242880,
    "rotate.interval.ms": "600000",
    "rotate.schedule.interval.ms": "600000",
    "locale": "fr-FR",
    "timezone": "UTC",
    "timestamp.extractor": "Record",
    "schema.compatibility": "NONE",
    "aws.secret.access.key": "secretkey",
    "aws.access.key.id": "accesskey",
    "s3.bucket.name": "feature-store-prod-kafka-test",
    "s3.region": "eu-west-1"
  }
}
Here is what I'm observing in the S3 bucket (screenshot omitted):
In these files I have a small number of messages in parquet.snappy (sometimes only 1, sometimes more). There are around 2 files per second per partition. (Since I'm using the Record timestamp, I think this is because the sink is catching up on the lag.)
What I was expecting is:
A file commit every 1,000,000 messages (flush.size) or every 10 minutes (rotate.schedule.interval.ms).
So, since 1M messages > 10 min * 1k msgs/s, I'm expecting either:
1. 6 (one every 10 minutes) * 6 (number of partitions) = 36 parquet files every hour, or
2. if I am wrong about that, at least files with 1M messages inside...
But neither 1 nor 2 is what I observe...
Instead, I have a huge lag, and a flush/commit to S3 only once every hour (see monitoring).
Does "partition.duration.ms": "3600000" lead to that observation?
Where am I wrong?
Why do I not see a continuous output flush of data, but such spikes instead?
Thanks!
Rémy
So yes: first, set partition.duration.ms to 10 minutes if you want one S3 object per 10 minutes. Second, if you really don't want small files, set rotate.interval.ms=-1 and rotate.schedule.interval.ms to 10 minutes (however, you then lose the exactly-once delivery guarantee).
When using rotate.interval.ms, what happens is that each time the connector receives a record with a timestamp earlier than that of the current file, kafka-connect flushes, leading to very small files at the beginning and end of each hour; this is what ensures exactly-once delivery in all failure cases.
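Applied to the connector config above, the suggested changes would look something like this (only the changed keys are shown; a sketch of this advice, not a tested config):

{
  "partition.duration.ms": "600000",
  "rotate.interval.ms": "-1",
  "rotate.schedule.interval.ms": "600000"
}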
I have the following host.json file:
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "maxPollingInterval": "00:00:02",
      "visibilityTimeout": "00:00:30",
      "batchSize": 16,
      "maxDequeueCount": 3,
      "newBatchThreshold": 8
    }
  }
}
I would expect that with this setup there could never be more than batchSize + newBatchThreshold (here 16 + 8 = 24) instances running. But I realized that when messages are dequeued they are run instantly, not just added to the back of the queue. This means you can end up with a very high number of instances, causing a lot of 429s (too many requests). Is there any way to configure the function app to just add the dequeued messages to the back of the queue?
It was not related to dequeueCount. The problem was that it was a Consumption plan, where you can't control the number of instances. After changing to a Standard plan it worked as expected.
Lamps in my living room switch on 15 minutes before sunset (by making use of a rule and the daylight sensor in the bridge). However, I also want the lamps in the garden to switch on, but 15 minutes AFTER sunset. There is only one daylight sensor, so the question is whether (and how) I could use a (new) rule that switches the garden lights on 30 minutes later than the living room lamps (which is equal to 15 minutes after sunset).
You can create a schedule timer that expires after 30 minutes and turns on your garden lamps. Make sure the "status" of the schedule is initially disabled and that "autodelete" is false. See the Hue API for more details about creating schedules (registration required).
In the rule that turns on your living room lamps, add an additional action that enables the schedule. When the schedule timer expires the garden lights will go on and the schedule will be disabled again.
The schedule would look something like this (update the command for your situation, the example below turns on all lights):
{
  "autodelete": false,
  "status": "disabled",
  "localtime": "PT00:30:00",
  "name": "Sunset timer",
  "command": {
    "method": "PUT",
    "address": "/api/<your api user>/groups/0/action",
    "body": {"on": true}
  }
}
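To register the schedule you POST it to the bridge (a minimal sketch; the bridge address and API user are placeholders, and the bridge replies with the new schedule id used in the rule action below):

# Sketch only: the bridge IP and API user are placeholders.
import requests

BRIDGE = "http://192.168.1.2"
USER = "<your api user>"

schedule = {
    "autodelete": False,
    "status": "disabled",
    "localtime": "PT00:30:00",
    "name": "Sunset timer",
    "command": {
        "method": "PUT",
        "address": f"/api/{USER}/groups/0/action",
        "body": {"on": True},
    },
}

resp = requests.post(f"{BRIDGE}/api/{USER}/schedules", json=schedule)
print(resp.json())  # e.g. [{"success": {"id": "3"}}]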
The action to start the schedule would be:
{
  "address": "/schedules/<your schedule id>",
  "method": "PUT",
  "body": {"status": "enabled"}
}
I'm trying to use hawtio to view some messages enqueued on topics in ActiveMQ.
But when I click on "view messages", I get a blank list as output (even though I know the contents are not blank).
This is the error message I get when I browse around localhost:8080/hawtio/, so I'm guessing something related to this is causing it.
Failed to get a response! { "error_type": "javax.management.InstanceNotFoundException", "error": "javax.management.InstanceNotFoundException : org.fusesource.insight:type=LogQuery", "status": 404, "request": { "operation": "logResultsSince", "mbean": "org.fusesource.insight:type=LogQuery", "arguments": [ 0 ], "type": "exec" }, "stacktrace": "javax.management.InstanceNotFoundException: org.fusesource.insight:type=LogQuery\n\tat com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)\n\tat com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBeanInfo(DefaultMBeanServerInterceptor.java:1375)\n\tat com.sun.jmx.mbeanserver.JmxMBeanServer.getMBeanInfo(JmxMBeanServer.java:920)\n\tat org.jolokia.handler.ExecHandler.extractMBeanParameterInfos(ExecHandler.java:167)\n\tat org.jolokia.handler.ExecHandler.extractOperationTypes(ExecHandler.java:133)\n\tat org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:84)\n\tat org.jolokia.handler.ExecHandler.doHandleRequest(ExecHandler.java:40)\n\tat org.jolokia.handler.JsonRequestHandler.handleRequest(JsonRequestHandler.java:89)\n\tat org.jolokia.backend.MBeanServerExecutorLocal.handleRequest(MBeanServerExecutorLocal.java:109)\n\tat org.jolokia.backend.MBeanServerHandler.dispatchRequest(MBeanServerHandler.java:102)\n\tat org.jolokia.backend.LocalRequestDispatcher.dispatchRequest(LocalRequestDispatcher.java:91)\n\tat org.jolokia.backend.BackendManager.callRequestDispatcher(BackendManager.java:388)\n\tat org.jolokia.backend.BackendManager.handleRequest(BackendManager.java:150)\n\tat org.jolokia.http.HttpRequestHandler.executeRequest(HttpRequestHandler.java:197)\n\tat org.jolokia.http.HttpRequestHandler.handlePostRequest(HttpRequestHandler.java:131)\n\tat org.jolokia.jvmagent.JolokiaHttpHandler.executePostRequest(JolokiaHttpHandler.java:195)\n\tat org.jolokia.jvmagent.JolokiaHttpHandler.handle(JolokiaHttpHandler.java:143)\n\tat com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:77)\n\tat sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:83)\n\tat com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:80)\n\tat sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(ServerImpl.java:677)\n\tat com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:77)\n\tat sun.net.httpserver.ServerImpl$Exchange.run(ServerImpl.java:649)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat java.lang.Thread.run(Thread.java:724)\n" }
Incidentally, ActiveMQ doesn't support browsing of topics, only queues.
You'll need to upgrade to hawt.io 1.2M27, which fixes this issue. 1.2M26 assumed the log query MBean was always installed; M27 removed it from the defaults.
Also, we don't yet support all ActiveMQ message types; there's an open issue for that: https://github.com/hawtio/hawtio/issues/655
So if your messages are not text messages, this could be why you're not seeing the message body.