I want to re-configure the AR session after it has been running; specifically, I want to change the Augmented Images database.
I can't seem to find a way to reset the session configuration.
getSessionConfiguration(Session session)
This function only seems to be called once at the beginning.
Is there a way to re-configure? Should I not be using the fragment?
I make changes to my config on the fly. You can accomplish this by extending ArFragment and accessing the config there, or simply by accessing the ArFragment from your XML within your activity. Here is an example.
arSceneView.session?.apply {
    val changedConfig = config
    changedConfig.planeFindingMode = Config.PlaneFindingMode.HORIZONTAL_AND_VERTICAL
    configure(changedConfig)
}
That's it, just call configure(myNewConfig) and it will update the session for you.
Of course, in this example I get the current config, modify it, and pass it back, but you could build a new Config and pass that instead if preferred.
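Applied to the question's actual goal of swapping the Augmented Images database, a minimal sketch along the same lines could look like this (the entry name "my_image" and the myImageBitmap variable are placeholders for whatever image you load yourself):

arSceneView.session?.apply {
    val changedConfig = config
    // Build a fresh database for this session; name and bitmap are placeholders.
    val newDatabase = AugmentedImageDatabase(this).apply {
        addImage("my_image", myImageBitmap)
    }
    changedConfig.augmentedImageDatabase = newDatabase
    configure(changedConfig)
}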
I have a celery app which has to be pinged by another app. This other app uses json to serialize celery task parameters, but my app has a custom serialization protocol. When the other app tries to ping my app (app.control.ping), it throws the following error:
"Celery ping failed: Refusing to deserialize untrusted content of type application/x-stjson (application/x-stjson)"
My whole codebase relies on this custom encoding, so I was wondering if there is a way to configure a json serialization but only for this ping, and to continue using the custom encoding for the other tasks.
These are the relevant celery settings:
accept_content = [CUSTOM_CELERY_SERIALIZATION, "json"]
result_accept_content = [CUSTOM_CELERY_SERIALIZATION, "json"]
result_serializer = CUSTOM_CELERY_SERIALIZATION
task_serializer = CUSTOM_CELERY_SERIALIZATION
event_serializer = CUSTOM_CELERY_SERIALIZATION
Changing any of the last 3 to [CUSTOM_CELERY_SERIALIZATION, "json"] causes the app to crash, so that's not an option.
Specs: celery=5.1.2
python: 3.8
OS: Linux docker container
Any help would be much appreciated.
Changing any of the last 3 to [CUSTOM_CELERY_SERIALIZATION, "json"] causes the app to crash, so that's not an option.
That is because result_serializer, task_serializer, and event_serializer don't accept a list, only a single str value, unlike e.g. accept_content.
A list is possible for accept_content because it only needs to check whether the content type of an incoming message is one of the listed items. That isn't possible for e.g. result_serializer: if there were two items, which one should be chosen for the result of task-A? Hence the need for a single value.
This means that if you set result_serializer = 'json', it has a global effect: the results of all tasks (the returned values, which can be retrieved by calling e.g. response.get()) are serialized/deserialized with the json serializer. It might work for the ping, but it will not work for tasks whose results can't be directly serialized to/from JSON and really need the custom stjson serializer.
Currently, with Celery==5.1.2, a task-specific result_serializer doesn't seem to be possible, so you can't have a single task encoded as 'json' rather than 'stjson' without changing the setting globally; I assume the same applies to ping.
Open request to add result_serializer option for tasks
A short discussion in another question
Not the best solution, but as a workaround, instead of fixing it on your app's side, you may opt to add support for serializing/deserializing content of type 'application/x-stjson' in the other app.
other_app/celery.py
import ast
from celery import Celery
from kombu.serialization import register
# This is just a possible implementation. Replace with the actual serializer/deserializer for stjson in your app.
def stjson_encoder(obj):
    return str(obj)

def stjson_decoder(obj):
    obj = ast.literal_eval(obj)
    return obj

register(
    'stjson',
    stjson_encoder,
    stjson_decoder,
    content_type='application/x-stjson',
    content_encoding='utf-8',
)

app = Celery('other_app')

app.conf.update(
    accept_content=['json', 'stjson'],
)
Your app continues to accept and respond in the stjson format, but the other app is now configured to be able to parse that format as well.
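With the serializer registered, the ping from the other app's side should then work as usual; a quick sketch of what that call might look like (the timeout is just an example value):

# Issued from the other app, using the Celery instance defined above.
replies = app.control.ping(timeout=1.0)
print(replies)  # e.g. [{'celery@worker1': {'ok': 'pong'}}]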
I was using CacheConfiguration in Ignite until I got stuck on an issue with how to authenticate.
Because of that, I started changing the CacheConfiguration to ClientCacheConfiguration. However, after converting it I noticed that it is not able to save into the table, because it lacks the setIndexedTypes method, e.g.
Before
CacheConfiguration<String, IgniteParRate> cacheCfg = new CacheConfiguration<>();
cacheCfg.setName(APIConstants.CACHE_PARRATES);
cacheCfg.setIndexedTypes(String.class, IgniteParRate.class);
New
ClientCacheConfiguration cacheCfg = new ClientCacheConfiguration();
cacheCfg.setName(APIConstants.CACHE_PARRATES);
//cacheCfg.setIndexedTypes(String.class, IgniteParRate.class); --> this is not provided
I still need the table to be populated so it is easier for us to verify (using a client IDE like DBeaver).
Any way to solve this issue?
If you need to create tables/cache dynamically using the thin-client, you'll need to use the setQueryEntities() method to define the columns available to SQL "manually". (Passing in the classes with annotations is basically a shortcut for defining the query entities.) I'm not sure why setIndexedTypes() isn't available in the thin-client; maybe a question for the developer mailing list.
Alternatively, you can define your caches/tables in advance using a thick client. They'll still be available when using the thin-client.
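For illustration, here is a rough sketch of the thin-client route with query entities. The field names for IgniteParRate and the server address are assumptions; adjust them to your model and cluster:

import java.util.LinkedHashMap;

import org.apache.ignite.Ignition;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.ClientCacheConfiguration;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

// Describe the SQL-visible columns "manually" instead of setIndexedTypes().
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("currency", "java.lang.String"); // assumed field of IgniteParRate
fields.put("rate", "java.lang.Double");     // assumed field of IgniteParRate

QueryEntity entity = new QueryEntity(String.class.getName(), IgniteParRate.class.getName())
        .setFields(fields)
        .setTableName("PARRATES"); // the table name you will see in DBeaver

ClientCacheConfiguration cacheCfg = new ClientCacheConfiguration()
        .setName(APIConstants.CACHE_PARRATES)
        .setQueryEntities(entity);

ClientConfiguration clientCfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
try (IgniteClient client = Ignition.startClient(clientCfg)) {
    ClientCache<String, IgniteParRate> cache = client.getOrCreateCache(cacheCfg);
    // cache.put(...) and SQL queries against PARRATES work as before
}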
To add to the existing answer, you can also try to use cache templates for this:
https://apacheignite.readme.io/docs/cache-template
Pre-configure the templates, then use them when creating caches from the thin client.
I added some custom properties in the UpdateAttribute processor using the '+' button. For example, I declared a property 'DBConnectionURL' and gave it the value 'jdbc:mysql://localhost:3306/test'. Then, in the DBCPConnectionPool controller service, I simply used the value '${DBConnectionURL}' for the 'Database Connection URL' property. But I set the value of 'DBConnectionURL' manually. I want a way to feed the value dynamically from a file, so that I only need to change the value in the file and 'DBConnectionURL' picks up whatever is in the file. Is there a way to do it?
Rishab,
You have to use the NiFi variable registry.
In conf/nifi.properties, you can add the configuration below to dynamically update a value in your data flow.
nifi.variable.registry.properties=./dynamic.properties
You can define your variables in that dynamic.properties file; it should be present in the conf directory.
For example, if the dynamic.properties file contains the value below
DBCPURL= jdbc://<host>:<port>
then you can use it in your data flow as ${DBCPURL}.
Note: you should restart the NiFi service if you change any configuration in conf/nifi.properties; otherwise your changes will not take effect in the data flow.
Feel free to accept this as the answer if it worked for you.
I have a dependency with a parameterized constructor. When I call the action more than once, it shows this error:
Error activating IValidationPurchaseService
More than one matching bindings are available.
Activation path:
1) Request for IValidationPurchaseService
Suggestions:
1) Ensure that you have defined a binding for IValidationPurchaseService only once.
public ActionResult Detalhes(string regionUrl, string discountUrl, DetalhesModel detalhesModel)
{
    var validationPurchaseDTO = new ValidationPurchaseDTO {...}

    KernelFactory.Kernel.Bind<IValidationPurchaseService>().To<ValidationPurchaseService>()
        .WithConstructorArgument("validationPurchaseDTO", validationPurchaseDTO)
        .WithConstructorArgument("confirmPayment", true);

    this.ValidationPurchaseService = KernelFactory.Kernel.Get<IValidationPurchaseService>();
    ...
}
I'm not sure what you are trying to achieve with the code you cited. The error is raised because you bind the same service more than once, so when you try to resolve it, Ninject can't choose one (identical) binding over another. This is not how a DI container is supposed to be used. In your example you are not taking advantage of DI at all. You can replace this code:
KernelFactory.Kernel.Bind<IValidationPurchaseService>().To<ValidationPurchaseService>()
.WithConstructorArgument("validationPurchaseDTO", validationPurchaseDTO)
.WithConstructorArgument("confirmPayment", true);
this.ValidationPurchaseService = KernelFactory.Kernel.Get<IValidationPurchaseService>();
With this:
this.ValidationPurchaseService = new ValidationPurchaseService(validationPurchaseDTO: validationPurchaseDTO, confirmPayment: true);
If you could explain what you are trying to achieve by using Ninject in this scenario, the community will be able to assist further.
Your KernelFactory probably returns the same kernel (a singleton) on each successive call to the controller, which is why you add a similar binding every time you hit the URL that activates this controller. It probably works the first time and starts failing from the second request onwards.
I am using rrd4j to do what rrd4j does, and it works great. However, if I shut down my app and start it back up again, the data from the previous session will be gone.
I am using a normal file backend, like so:
RrdDef rrdDef = new RrdDef( "/path/to/my/file", 3000 );
Is there a setting or something I need to trigger to make rrd4j load the data from the previous session?
It seems you should use new RrdDb("/path/to/my/file") instead. From the Javadocs:
RrdDb(java.lang.String path): Constructor used to open already existing RRD in R/W mode, with a default storage (backend) type (file on the disk).
And also:
RrdDb(RrdDef rrdDef): Constructor used to create new RRD object from the definition.
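Putting the two constructors together, a small sketch of an open-or-create pattern (the path and step come from the question; you would still add your own datasources and archives to the RrdDef before creating the file):

import java.io.File;
import java.io.IOException;

import org.rrd4j.core.RrdDb;
import org.rrd4j.core.RrdDef;

public class RrdOpenOrCreate {
    public static void main(String[] args) throws IOException {
        String path = "/path/to/my/file";
        RrdDb rrdDb;
        if (new File(path).exists()) {
            // Reopen the existing file; data from previous sessions is kept.
            rrdDb = new RrdDb(path);
        } else {
            RrdDef rrdDef = new RrdDef(path, 3000);
            // ... define datasources and archives on rrdDef here ...
            rrdDb = new RrdDb(rrdDef);
        }
        // use rrdDb, then close it when done
        rrdDb.close();
    }
}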