I'm trying to use the Spring Data Redis API and want to execute a SET with the NX option. However, I can't find a relevant method in RedisTemplate. How can I do this?
The method name is setIfAbsent
http://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/core/BoundValueOperations.html#setIfAbsent-V-
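For illustration, a minimal sketch of what the call might look like (this assumes a StringRedisTemplate bean wired to a running Redis instance; the key and value are made up):

```java
// Sketch: setIfAbsent maps to the Redis command SET key value NX.
Boolean wasSet = redisTemplate.opsForValue()
        .setIfAbsent("lock:order:42", "owner-1");
// true if the key was absent and has now been set, false if it already existed
```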
Related
Is it possible to return the script that was loaded into Redis through SCRIPT LOAD using the SHA value?
No, currently (v6.x) there isn't a way to do that. There's an open PR (https://github.com/redis/redis/pull/4646) if you want to add your feedback there.
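As a workaround until then: the SHA that SCRIPT LOAD returns is simply the SHA1 hex digest of the script body, so you can keep the script text on the client side, keyed by its digest, and look it up yourself. A minimal sketch in plain Python (no Redis connection needed to compute the digest):

```python
import hashlib

def script_sha(script: str) -> str:
    """Compute the SHA1 hex digest that SCRIPT LOAD returns for a script body."""
    return hashlib.sha1(script.encode("utf-8")).hexdigest()

# Client-side registry: remember the source text, keyed by its SHA,
# so it can be recovered later without asking Redis for it.
scripts = {}
body = "return redis.call('GET', KEYS[1])"
scripts[script_sha(body)] = body
```

You can then pass the stored SHA to EVALSHA and still have the original source available for debugging.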
I was using CacheConfiguration in Ignite until I got stuck on an issue with how to authenticate.
Because of that, I started changing the CacheConfiguration to a ClientCacheConfiguration. However, after converting it, I noticed that it
is no longer able to save into the table, because ClientCacheConfiguration lacks the setIndexedTypes method, e.g.:
Before
CacheConfiguration<String, IgniteParRate> cacheCfg = new CacheConfiguration<>();
cacheCfg.setName(APIConstants.CACHE_PARRATES);
cacheCfg.setIndexedTypes(String.class, IgniteParRate.class);
New
ClientCacheConfiguration cacheCfg = new ClientCacheConfiguration();
cacheCfg.setName(APIConstants.CACHE_PARRATES);
//cacheCfg.setIndexedTypes(String.class, IgniteParRate.class); --> this is not provided
I still need the table to be populated so that it's easier for us to verify (using a client IDE like DBeaver).
Is there any way to solve this issue?
If you need to create tables/cache dynamically using the thin-client, you'll need to use the setQueryEntities() method to define the columns available to SQL "manually". (Passing in the classes with annotations is basically a shortcut for defining the query entities.) I'm not sure why setIndexedTypes() isn't available in the thin-client; maybe a question for the developer mailing list.
Alternatively, you can define your caches/tables in advance using a thick client. They'll still be available when using the thin-client.
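For illustration, a sketch of what the query-entity approach might look like for the cache above. The column names here are hypothetical; they should match the annotated fields of IgniteParRate:

```java
import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.client.ClientCacheConfiguration;

ClientCacheConfiguration cacheCfg = new ClientCacheConfiguration();
cacheCfg.setName(APIConstants.CACHE_PARRATES);

// Define the SQL schema "manually" instead of via setIndexedTypes()
QueryEntity entity = new QueryEntity(String.class.getName(), IgniteParRate.class.getName());
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("parRate", "java.lang.Double");    // hypothetical column
fields.put("currency", "java.lang.String");   // hypothetical column
entity.setFields(fields);

cacheCfg.setQueryEntities(entity);
```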
To add to the existing answer, you can also try using cache templates for this:
https://apacheignite.readme.io/docs/cache-template
Pre-configure the templates, then use them when creating caches from the thin client.
In our infrastructure, we set multiple grains on the minion, including an "environment" grain and an "app" grain.
When we use the cli, we can get the correct minions using:
salt -C "G@app:middle_tier_1 and G@environment:dev" test.ping
But if we try to use the CherryPy API, we only get a result if we set a single target, like:
[{"client":"local","tgt_type":"grain","fun":"test.ping","tgt":"G@app:middle_tier_1"}]
or
[{"client":"local","tgt_type":"grain","fun":"test.ping","tgt":"G@environment:dev"}]
With multiple targets, we don't get any results:
[{"client":"local","tgt_type":"grain","fun":"test.ping","tgt":"G@app:middle_tier_1 and G@environment:dev"}]
[{"client":"local","tgt_type":"grain","fun":"test.ping","tgt":["G@app:middle_tier_1","G@environment:dev"]}]
According to the documentation, I can use a list in the tgt parameter.
I have looked their documentation fairly extensively and have found no examples of this type of minion targeting.
Is this even possible, and if so, how would I go about doing it?
Extra info:
salt-master 2018.3.2 (Oxygen)
salt-api 2018.3.2 (Oxygen)
Thanks in advance!
If you want to target on multiple grains, the tgt_type is compound, not grain.
See: https://docs.saltstack.com/en/latest/ref/clients/#salt-s-client-interfaces, https://docs.saltstack.com/en/latest/topics/targeting/compound.html
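For example, the failing request above should work once rewritten with the compound target type (a sketch, untested against your setup):

```json
[{"client": "local",
  "tgt_type": "compound",
  "fun": "test.ping",
  "tgt": "G@app:middle_tier_1 and G@environment:dev"}]
```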
I'm trying to create an Integration Automation Script for a PUBLISHED channel that updates a database field.
Basically, for WOACTIVITY I just want to set a field value to 1 on the Work Order when the PUBLISHED channel is triggered.
Does anyone have any ideas or example scripts that could help? I just can't get it to work.
What about using a SET processing rule on the publish channel? Processing rules are evaluated every time the channel is activated, and a SET action will let you set an attribute for the parent object to a specified value. You can read more about processing rules here.
Adding a new answer based on experience, in case it helps anyone.
If you create an automation script for integration against a Publish Channel and then select External Exit or User Exit there's an implicit variable irData that has access to the MBO being worked on. You can then use that MBO as you would in any other script. Note that because you're changing a record that's integrated you'll probably want a skip rule in your publish channel that skips records with your value set or you may run into an infinite publish --> update --> publish loop.
# irData is the implicit variable giving access to the MBO being integrated
woMbo = irData.getCurrentMbo()
woMboSet = woMbo.getThisMboSet()
# set the flag field on the Work Order, then persist the change
woMbo.setValue("FIELD", 1)
woMboSet.save()
I'm trying to test my Luigi pipelines inside a Vagrant machine using FakeS3 to simulate my S3 endpoints. For boto to be able to interact with FakeS3, the connection must be set up with the OrdinaryCallingFormat, as in:
from boto.s3.connection import S3Connection, OrdinaryCallingFormat
conn = S3Connection('XXX', 'XXX', is_secure=False,
                    port=4567, host='localhost',
                    calling_format=OrdinaryCallingFormat())
but when using Luigi, this connection is buried in the s3 module. I was able to pass most of the options by modifying my luigi.cfg and adding an [s3] section, as in:
[s3]
host=127.0.0.1
port=4567
aws_access_key_id=XXX
aws_secret_access_key=XXXXXX
is_secure=0
but I don't know how to pass the required object for the calling_format.
Now I'm stuck and don't know how to proceed. Options I can think of:
Figure out how to pass the OrdinaryCallingFormat to S3Connection through luigi.cfg
Figure out how to force boto to always use this calling format in my Vagrant machine, by setting some option, unknown to me, either in .aws/config or boto.cfg
Make FakeS3 accept the default calling_format used by boto, which happens to be SubdomainCallingFormat (whatever that means).
Any ideas about how to fix this?
Can you not pass it into the constructor as a kwarg for the S3Client? Something like:
from boto.s3.connection import OrdinaryCallingFormat
from luigi.s3 import S3Client, S3Target

client = S3Client(aws_access_key, aws_secret_key,
                  calling_format=OrdinaryCallingFormat())
target = S3Target('s3://somebucket/test', client=client)
I did not encounter any problems when using boto3 to connect to FakeS3.
import boto3
s3 = boto3.client(
    "s3", region_name="fakes3",
    use_ssl=False,
    aws_access_key_id="",
    aws_secret_access_key="",
    endpoint_url="http://localhost:4567"
)
No special calling method is required.
Perhaps I am wrong that you really need OrdinaryCallingFormat. If my code doesn't work, please go through the GitHub issue on boto3 support:
https://github.com/boto/boto3/issues/334
You can set it with the calling_format parameter. Here is an example configuration for fake-s3:
[s3]
aws_access_key_id=123
aws_secret_access_key=abc
host=fake-s3
port=4569
is_secure=0
calling_format=boto.s3.connection.OrdinaryCallingFormat