Affinity Backup Filter - Ignite

Trying to set up an affinity backup filter. Most of the bits are clear and I am following the details outlined here - https://ignite.apache.org/releases/latest/javadoc/index.html?org/apache/ignite/cache/affinity/rendezvous/ClusterNodeAttributeAffinityBackupFilter.html
which talks about placing backups by availability zone, where each node declares the value of its AZ.
However, the thing I am not clear on is where this AZ value is set for each node. The above link says "that the environment variable "AVAILABILTY_ZONE" be set appropriately on each node via some means external to Ignite".
I see a couple of options where this could be set:
Use System.setProperty() (based on the above comment about an environment variable)
Set it as part of IgniteConfiguration.setUserAttributes() (based on the ClusterNodeAttributeAffinityBackupFilter source, which compares node attributes)
Any inputs around this are helpful.
TIA

I suppose this documentation might be better structured.
The main idea is to set a user-defined attribute, for example "color", to "red" or "blue".
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="userAttributes">
<map>
<entry key="color" value="blue"/>
</map>
</property>
</bean>
Or config.setUserAttributes(F.asMap("color", "red"));
And reference it in your backup filter configuration:
<bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
<property name="affinityBackupFilter">
<bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
<constructor-arg>
<array value-type="java.lang.String">
<value>color</value>
</array>
</constructor-arg>
</bean>
</property>
</bean>
In that case, prior to saving a backup copy, Ignite will check whether the "color" attribute of the backup node differs from the current node's one (i.e. if it's "red", then we need to search for a "blue" node).
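For completeness, here is a rough Java sketch of the same setup done programmatically (the cache name and backup count are made up for illustration):
import java.util.Collections;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Sketch: set the "color" user attribute on the node and tell the affinity function
// to spread backups across nodes with different "color" values.
public class ColorAwareNode {
    public static void main(String[] args) {
        RendezvousAffinityFunction affinity = new RendezvousAffinityFunction();
        affinity.setAffinityBackupFilter(new ClusterNodeAttributeAffinityBackupFilter("color"));

        CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");
        cacheCfg.setBackups(1);
        cacheCfg.setAffinity(affinity);

        IgniteConfiguration cfg = new IgniteConfiguration()
                .setUserAttributes(Collections.singletonMap("color", "red"))
                .setCacheConfiguration(cacheCfg);

        Ignition.start(cfg);
    }
}
Either way, the value only needs to end up as a node attribute that the filter can read and compare across nodes.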

Related

Apache Ignite SQL query too slow

3 node cluster.
Each node has 2 * L5520 physical processors, 64GB memory and a 1TB HDD.
I used COPY FROM ... FORMAT CSV to import the data into Ignite. Now when I execute a SQL query in the JDBC console, it's very slow. Can someone suggest any optimizations?
You are missing indexes on the cache.
<property name="indexes">
<list>
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg value="xyzFromKeyorVal"/>
</bean>
</list>
</property>
Add the above property to your cacheConfiguration. 'xyzFromKeyorVal' is just whichever property of the key or value object you want to put an index on.
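If it helps, the same index can be declared programmatically; a rough sketch (the cache name, key/value types and the field name are placeholders):
import java.util.Collections;
import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;

// Sketch: declare a queryable field and a single-column index on it.
public class IndexedCacheConfig {
    public static CacheConfiguration<Long, Object> build() {
        QueryEntity entity = new QueryEntity("java.lang.Long", "MyValueType");

        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("xyzFromKeyorVal", "java.lang.String");
        entity.setFields(fields);
        entity.setIndexes(Collections.singletonList(new QueryIndex("xyzFromKeyorVal")));

        CacheConfiguration<Long, Object> cacheCfg = new CacheConfiguration<>("myCache");
        cacheCfg.setQueryEntities(Collections.singletonList(entity));
        return cacheCfg;
    }
}
Since you loaded the data through SQL (COPY FROM), you should also be able to create the index straight from the JDBC console with a CREATE INDEX statement on the table and column you filter on.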

Jackrabbit index boost configuration

I'm trying to understand the use of 'boosting' properties in the indexing configuration for CQ5. I thought I understood from http://wiki.apache.org/jackrabbit/IndexingConfiguration that setting a boost determined how far up the list an item would be returned as a search result. So I tried adding the following boost lines to my default CQ5 indexing configuration:
<index-rule nodeType="nt:base">
    <property boost="5.0">jcr:title</property>
    <property boost="5.0">history:title</property>
    <property boost="3.0">history:description</property>
    <property boost="3.0">history:caption</property>
    <property boost="2.0">text</property>
    <property nodeScopeIndex="false">analyticsProvider</property>
    <property nodeScopeIndex="false">analyticsSnippet</property>
    <property nodeScopeIndex="false">hideInNav</property>
    <property nodeScopeIndex="false">offTime</property>
    <property nodeScopeIndex="false">onTime</property>
    :
    :
    <property isRegexp="true">.*:.*</property>
</index-rule>
The intent was that, in a full text search, text found in jcr:title or history:title properties would be the most relevant followed by history:description, history:caption and, finally, text.
I deleted the index information from the repository and from the workspace, then restarted CQ and let it rebuild all of the indexes.
Now when I do a full text search, I'm only getting results if the search text is in the nodename itself - nothing from description, caption, etc.
Obviously I've done something wrong but I'm not sure what. Any help would be greatly appreciated.
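As a debugging aid (this is not from the original post), a small JCR query ordered by jcr:score can show whether full-text matches on those properties come back at all, independent of the CQ5 UI; the search term and the way the Session is obtained are placeholders:
import javax.jcr.NodeIterator;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;
import javax.jcr.query.QueryResult;

// Sketch: run a full-text query and print the result paths in score order.
public class FullTextCheck {
    public static void dump(Session session, String term) throws Exception {
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query q = qm.createQuery(
                "//*[jcr:contains(., '" + term + "')] order by @jcr:score descending",
                Query.XPATH);
        QueryResult result = q.execute();
        for (NodeIterator it = result.getNodes(); it.hasNext(); ) {
            System.out.println(it.nextNode().getPath());
        }
    }
}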

CAS LDAP Search Subtree

I'm using the latest version of the Jasig CAS server (4.0.0) with an LDAP server.
Users are stored under this LDAP structure: ou=Users,ou=SSOTEST,dc=mycompany,dc=com
What I want is to search for a user from a top level (for example: ou=SSOTEST,dc=mycompany,dc=com).
CAS server has an LdapPersonAttributeDao bean which is looking for an object matching a search filter. Here is the code for this bean :
<bean id="ldapPersonAttributeDao"
class="org.jasig.cas.persondir.LdapPersonAttributeDao"
p:connectionFactory-ref="searchPooledLdapConnectionFactory"
p:baseDN="ou=SSOTEST,dc=company,dc=com"
p:searchControls-ref="searchControls"
p:searchFilter="uid={0}">
<property name="resultAttributeMapping">
<map>
<!--
| Key is LDAP attribute name, value is principal attribute name.
-->
<entry key="memberOf" value="userMemberOf" />
<entry key="cn" value="userCn" />
</map>
</property>
</bean>
And now the searchControls bean, which does a lookup at SUBTREE_SCOPE (2) level (according to the SearchControls scope level values).
<bean id="searchControls"
class="javax.naming.directory.SearchControls"
p:searchScope="2"
p:countLimit="10" />
When I run my CAS server and I try to authenticate, everything works but there are no extra attributes returned.
I think the problem comes from searchScope, which doesn't seem to be set to the wanted value.
Here is the output log from the server:
<execute request=[org.ldaptive.SearchRequest#-1312441815::baseDn=ou=SSOTEST,dc=mycompany,dc=com, searchFilter=[org.ldaptive.SearchFilter#-339191059::filter=uid={0}, parameters={0=myuser}], returnAttributes=[], searchScope=null, timeLimit=0, sizeLimit=10 [...]
I know it's been some time since this question was asked, but I managed to fix this problem by adding:
<bean class="org.springframework.context.annotation.CommonAnnotationBeanPostProcessor" />
to deployerConfigContext.xml.
The cause of this issue was that the initialize method in LdapPersonAttributeDao was not being invoked because the @PostConstruct annotation wasn't being processed. For this reason the searchScope variable was never set.
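To illustrate the mechanism (a simplified sketch, not the real LdapPersonAttributeDao): Spring only invokes @PostConstruct methods when a CommonAnnotationBeanPostProcessor (or <context:annotation-config/>) is registered, so without it any configuration applied inside such an init method is silently skipped:
import javax.annotation.PostConstruct;
import javax.naming.directory.SearchControls;

// Simplified illustration: the injected SearchControls are only turned into the
// effective scope inside an @PostConstruct method.
public class PersonAttributeDaoSketch {

    private SearchControls searchControls; // set via p:searchControls-ref
    private Integer effectiveScope;        // later copied into the LDAP search request

    public void setSearchControls(SearchControls searchControls) {
        this.searchControls = searchControls;
    }

    @PostConstruct
    public void initialize() {
        // Without CommonAnnotationBeanPostProcessor, Spring never calls this method,
        // so effectiveScope stays null, which is why the log shows searchScope=null.
        this.effectiveScope = searchControls.getSearchScope();
    }
}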

Is an XA transaction really atomic?

It seems that I don't completely understand how an XA transaction works. I thought that it is atomic: that when I commit the transaction, the new messages and the new data become available at the same time.
This misunderstanding led me to the following issue:
New rows are inserted into the DB and a message is sent to a queue in a transactional route. In another route the message is received. Then this route tries to perform some manipulations on the rows that were inserted by the previous route. But it doesn't see them!
The second route is configured to roll the message back to the queue when an exception happens. And I see that after the second run the route does see the rows!
As a conclusion, I would ask the following questions:
Is an XA transaction really atomic?
If no, how can I configure commit order for my transactional resources?
Additional note: the issue is found in Fuse ESB/ServiceMix 4.4.1
To Jake:
My Camel context configuration looks like the following:
<osgi:reference id="osgiPlatformTransactionManager" interface="org.springframework.transaction.PlatformTransactionManager"/>
<osgi:reference id="osgiJtaTransactionManager" interface="javax.transaction.TransactionManager"/>
<osgi:reference id="myDataSource"
interface="javax.sql.DataSource"
filter="(osgi.jndi.service.name=jdbc/postgresXADB)"/>
<bean id="PROPAGATION_MANDATORY" class="org.apache.camel.spring.spi.SpringTransactionPolicy">
<property name="transactionManager" ref="osgiPlatformTransactionManager"/>
<property name="propagationBehaviorName" value="PROPAGATION_MANDATORY"/>
</bean>
<bean id="PROPAGATION_REQUIRED" class="org.apache.camel.spring.spi.SpringTransactionPolicy">
<property name="transactionManager" ref="osgiPlatformTransactionManager"/>
<property name="propagationBehaviorName" value="PROPAGATION_REQUIRED"/>
</bean>
<bean id="jmstx" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="configuration" ref="jmsTxConfig" />
</bean>
<bean id="jmsTxConfig" class="org.apache.camel.component.jms.JmsConfiguration">
<property name="connectionFactory" ref="jmsXaPoolConnectionFactory"/>
<property name="transactionManager" ref="osgiPlatformTransactionManager"/>
<property name="transacted" value="false"/>
<property name="cacheLevelName" value="CACHE_NONE"/>
<property name="concurrentConsumers" value="${jms.concurrentConsumers}" />
</bean>
<bean id="jmsXaPoolConnectionFactory" class="org.apache.activemq.pool.XaPooledConnectionFactory">
<property name="maxConnections" value="${jms.maxConnections}" />
<property name="connectionFactory" ref="jmsXaConnectionFactory" />
<property name="transactionManager" ref="osgiJtaTransactionManager" />
</bean>
<bean id="jmsXaConnectionFactory" class="org.apache.activemq.ActiveMQXAConnectionFactory">
<property name="brokerURL" value="${jms.broker.url}"/>
<property name="redeliveryPolicy">
<bean class="org.apache.activemq.RedeliveryPolicy">
<property name="maximumRedeliveries" value="-1"/>
<property name="initialRedeliveryDelay" value="2000" />
<property name="redeliveryDelay" value="5000" />
</bean>
</property>
</bean>
The DB data source is configured as follows:
<bean id="myDataSource" class="org.postgresql.xa.PGXADataSource">
<property name="serverName" value="${db.host}"/>
<property name="databaseName" value="${db.name}"/>
<property name="portNumber" value="${db.port}"/>
<property name="user" value="${db.user}"/>
<property name="password" value="${db.password}"/>
</bean>
<service ref="myDataSource" interface="javax.sql.XADataSource">
<service-properties>
<entry key="osgi.jndi.service.name" value="jdbc/postgresXADB"/>
<entry key="datasource" value="postgresXADB"/>
</service-properties>
</service>
I'm not an expert in this stuff, but my view would be that the atomicity that XA provides guarantees only that:
Either the entire commit occurs or the entire commit rolls back.
That the whole commit/rollback completes before the commit request returns to whoever called it.
I don't think any guarantee is made regarding the individual participants completing at the same instant, nor is there any kind of 'commit dependency tree' maintained guaranteeing that subsequent processing only happens on participants who have committed.
I think to achieve what you want, you might need to put the message queue outside the main transaction... Which destroys the whole point of the transaction in the first place :(
I think you might just have to put a retry/timeout loop in your downstream processing (a rough sketch follows below). The alternative might be to explore the concurrency options to see if you can allow that downstream transaction to 'see' the upstream one.
Hopefully this answer will prompt someone with more knowledge of this stuff to chip in!
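For what it's worth, a rough, generic sketch of such a retry/timeout loop (all names and timings here are made up, nothing Camel-specific):
import java.util.List;
import java.util.function.Supplier;

// Sketch: keep polling for the rows inserted upstream until they become visible
// or the deadline passes, then fail so the message gets redelivered.
public class RetryUntilVisible {
    public static <T> List<T> poll(Supplier<List<T>> query, long timeoutMs, long delayMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        List<T> rows = query.get();
        while (rows.isEmpty() && System.currentTimeMillis() < deadline) {
            Thread.sleep(delayMs);   // simple fixed delay between attempts
            rows = query.get();
        }
        if (rows.isEmpty()) {
            throw new IllegalStateException("Rows still not visible after " + timeoutMs + " ms");
        }
        return rows;
    }
}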
For XA transactions, when you commit, Camel will run an XA commit for each DB, ensuring that all XA branches will be committed eventually, but not simultaneously. Since the data in each DB is committed separately, the overall result is not atomic, and it is impossible to make data modifications across databases atomic.
For your application, there may be two choices to avoid the problem.
Don't use XA transactions; instead use the Outbox pattern. You can update part of the data, send a message to the queue, and then return. Any order-dependent operation is put into the queue, where you can easily control the order.
Based on your XA solution, when you want to read the newest data, you call SELECT ... FOR UPDATE, which will wait for the row lock held by the unfinished XA transaction. When the XA transaction finishes, SELECT ... FOR UPDATE will return the newest data (a minimal JDBC sketch follows below).
Your AMQ_SCHEDULED_DELAY header is a workaround, but it does not work when exceptions happen.
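A minimal JDBC sketch of the second option (the table and column names are made up, not from the original setup):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

// Sketch: per the suggestion above, SELECT ... FOR UPDATE waits for the row lock
// held by the unfinished XA transaction before returning the row.
public class SelectForUpdateExample {
    public static String readStatus(DataSource ds, long orderId) throws Exception {
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "SELECT status FROM orders WHERE id = ? FOR UPDATE")) {
            ps.setLong(1, orderId);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("status") : null;
            }
        }
    }
}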

How do I configure Spring Security authentication to deal with a complex Active Directory / LDAP account tree?

(Context: I'm an experienced programmer, but new to LDAP, AD and Spring.)
We are a Windows shop, so all of our authentication is done with Active Directory. We are attempting to integrate a third-party product that is written in Java, so it does all of its authentication using Spring Security. So far, so good -- they've done that integration before, and there's a good deal online about how to set things up.
The problem is, our AD setup is a bit complex: in particular, our user accounts exist in various nodes in the AD/LDAP tree. To give a simplified example, say the LDAP tree looks like this:
DC=my-domain,DC=com
+ CN=Users
++ CN=user1,CN=Users,DC=my-domain,DC=com
+ CN=Staff
++ CN=user2,CN=Staff,DC=my-domain,DC=com
The thing is, all of the examples I have found let me authenticate either user1 or user2, but not both. That is, the following XML snippet will work to authenticate user1 against roles defined under "Groups":
<security:ldap-server url="ldap://my-domain.com:389" manager-dn="CN=manager_svc,OU=System Users,DC=my-domain,DC=com" manager-password="MyPa55w0rd"/>
<security:ldap-authentication-provider
    user-dn-pattern=""
    user-search-base="CN=Users,DC=my-domain,DC=com"
    user-search-filter="(&amp;(sAMAccountName={0})(objectclass=user))"
    group-search-base="OU=Groups,DC=mydomain,DC=com"
    group-search-filter="member={0}"
/>
but that won't authenticate user2, since he doesn't match the user-search-base. Contrariwise, I can change user-search-base to CN=Staff,DC=my-domain,DC=com, which will work for user2, but then it won't work for user1.
So the question is, how do I make this search work for user accounts that are scattered across the AD/LDAP tree? I can imagine two possibilities, but I haven't figured out how to do either yet:
On the one hand, if I can make user-search-base multi-valued, that solves my problem easily and correctly: I just put in all of the locations where user accounts might be found. So far, all of my attempts to do this have met with one error or another, but I'm still experimenting.
OTOH, there is Subtree scoping of the search. I can see in the interactive LDAP tools that search can be either single-level or subtree. As far as I can tell, Spring out of the box is doing single-level. I can see that the underlying FilterBasedLdapUserSearch class has a setSearchSubtree() method, which looks like what I want, but I can't find a way to set that to true from the XML. (For now, let's assume that it isn't feasible to change the underlying Java program.)
The first option would be ideal, since it is probably much more efficient, but if that isn't possible and the second is, I suspect we can make it work.
I have a suspicion that the second approach is possible using thorny bean hackery, but I know next to nothing about beans, so I'd rather not wade into those thickets by myself. Does anybody have a good recipe to recommend?
Thanks much for any guidance you can provide...
I solved this by using a searchBase with the empty string value (this uses the root as the searchbase, just like prule's answer), but I also had to set the property "referral" to "follow", otherwise I got a PartialResultException!
DefaultSpringSecurityContextSource contextSource = new DefaultSpringSecurityContextSource(...);
contextSource.setReferral("follow");
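Putting those two points together in plain Java (a rough sketch; the URL is a placeholder, and this also shows the setSearchSubtree() call the question mentions):
import org.springframework.security.ldap.DefaultSpringSecurityContextSource;
import org.springframework.security.ldap.search.FilterBasedLdapUserSearch;

// Sketch: empty search base (search from the root), follow referrals, subtree scope.
public class RootSubtreeSearchConfig {
    public static FilterBasedLdapUserSearch build() throws Exception {
        DefaultSpringSecurityContextSource contextSource =
                new DefaultSpringSecurityContextSource("ldap://my-domain.com:389/DC=my-domain,DC=com");
        contextSource.setReferral("follow");              // avoids the PartialResultException
        contextSource.afterPropertiesSet();

        FilterBasedLdapUserSearch search = new FilterBasedLdapUserSearch(
                "",                                            // empty base: search from the root
                "(&(sAMAccountName={0})(objectclass=user))",   // same filter as in the question
                contextSource);
        search.setSearchSubtree(true);                         // SUBTREE instead of ONELEVEL
        return search;
    }
}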
You could try searching from the domain root, if that is feasible, though that can cause problems with AD.
Alternatively, use of explicit bean configuration is probably your best option. You can inject a custom LdapUserSearch implementation into the BindAuthenticator bean, which searches under all the necessary locations. If you look at the example in the docs, it shows a FilterBasedLdapUserSearch configuration. You could either use a couple of these, or implement the interface yourself from scratch. Here's a quick hack as an example:
// Imports assume the Spring Security 3.x packaging used elsewhere in this answer.
import org.springframework.ldap.core.DirContextOperations;
import org.springframework.ldap.core.support.BaseLdapPathContextSource;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.security.ldap.search.FilterBasedLdapUserSearch;
import org.springframework.security.ldap.search.LdapUserSearch;

public class CustomLdapSearch implements LdapUserSearch {
    public static final String SAM_FILTER = "(&(sAMAccountName={0})(objectclass=user))";

    final LdapUserSearch users;
    final LdapUserSearch staff;

    public CustomLdapSearch(BaseLdapPathContextSource contextSource) {
        users = new FilterBasedLdapUserSearch("CN=Users,DC=my-domain,DC=com", SAM_FILTER, contextSource);
        staff = new FilterBasedLdapUserSearch("CN=Staff,DC=my-domain,DC=com", SAM_FILTER, contextSource);
    }

    public DirContextOperations searchForUser(String username) {
        try {
            // Look under CN=Users first...
            return users.searchForUser(username);
        } catch (UsernameNotFoundException e) {
            // ...and fall back to CN=Staff if the user wasn't found there.
            return staff.searchForUser(username);
        }
    }
}
Then change the BindAuthenticator configuration to:
<bean class="org.springframework.security.ldap.authentication.BindAuthenticator">
<constructor-arg ref="contextSource"/>
<property name="userSearch" ref="customSearch"/>
</bean>
<bean id="customSearch" class="CustomLdapSearch">
<constructor-arg ref="contextSource"/>
</bean>
I've done something similar with spring-security-2.0.x using FilterBasedLdapUserSearch - where users were spread across multiple nodes:
<bean id="ldapUserSearch" class="org.springframework.security.ldap.search.FilterBasedLdapUserSearch">
<constructor-arg value=""/> <!-- optional sub-tree here -->
<constructor-arg value="(&(sAMAccountName={0})(objectclass=user))"/>
<constructor-arg ref="contextSource"/>
</bean>
<bean id="ldapAuthProvider"
class="org.springframework.security.providers.ldap.LdapAuthenticationProvider">
<constructor-arg>
<bean class="org.springframework.security.providers.ldap.authenticator.BindAuthenticator">
<constructor-arg ref="contextSource"/>
<property name="userSearch" ref="ldapUserSearch"/>
</bean>
</constructor-arg>
<property name="userDetailsContextMapper" ref="userDetailsContextMapper"/>
</bean>
<bean id="contextSource"
class="org.springframework.security.ldap.DefaultSpringSecurityContextSource">
<constructor-arg value="ldap://localhost:10389/CN=Users,DC=my-domain,DC=com"/>
<!-- you may or may not need to connect with an account that can search -->
<!--<property name="userDn" value="uid=admin,ou=system"/>-->
<!--<property name="password" value="secret"/>-->
</bean>