Getting a list of nodes for a user based on permissions

I have many 'nt:folder' nodes, and upon each folder's creation, permissions have been granted to a different set of users.
Now I need to get the list of nodes a given user has (read and write) permissions on.
I am using Jackrabbit 2.6.0.
Partial snippet of the user creation and privilege assignment:
User creation:
UserManager userManager = ((JackrabbitSession) session).getUserManager();
org.apache.jackrabbit.api.security.user.User user =
    (org.apache.jackrabbit.api.security.user.User) userManager.getAuthorizable(userName);
Add entry:
javax.jcr.security.Privilege[] privileges = new javax.jcr.security.Privilege[] {
    accessControlManager.privilegeFromName(javax.jcr.security.Privilege.JCR_WRITE)
};
Temporary folder access:
Map<String, Value> restrictions = new HashMap<String, Value>();
restrictions.put("rep:nodePath",
    valueFactory.createValue(userDbInstance.getUserFilePath(), PropertyType.PATH));
restrictions.put("rep:glob", valueFactory.createValue("*"));
accessControlList.addEntry(userPrincipal, privileges, true /* allow or deny */, restrictions);
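For the entry to take effect, the modified list presumably still has to be bound back and the session saved. A minimal sketch using the standard JCR access control API, assuming accessControlList is a JackrabbitAccessControlList obtained from the same accessControlManager:

// Write the updated policy back and persist the change.
accessControlManager.setPolicy(accessControlList.getPath(), accessControlList);
session.save();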
Adding a node:
public Node addNode(String parent, String name, ETNodeTypes type) throws JCRServiceException {
    checkSession();
    try {
        name = Text.escapeIllegalJcrChars(name);
        logger.debug("Adding Node: " + parent + " type: " + type + " name(escaped): " + name);
        Node node = session.getNode(parent).addNode(name, type.getName());
        node.addMixin("rep:AccessControllable");
        logger.debug("Node added: " + node.getPath());
        return node;
    } catch (RepositoryException e) {
        e.printStackTrace();
        throw new JCRServiceException(e, e.getMessage(), "Unable to create");
    }
}
Thanks.

I recently posted an answer to a similar question: Using JCR-SQL2 for querying ACLs in a Jackrabbit repository.
This was my example query:
SELECT resource.*, ace.*
FROM [nt:hierarchyNode] AS resource
INNER JOIN [rep:ACL] AS acl ON ISCHILDNODE(acl, resource)
INNER JOIN [rep:ACE] AS ace ON ISCHILDNODE(ace, acl)
WHERE ace.[rep:principalName] = 'username'
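A minimal sketch of executing that query from Java via the standard JCR 2.0 query API, assuming an existing javax.jcr.Session named session. Because the query joins multiple selectors, the result must be read row by row rather than via getNodes():

QueryManager qm = session.getWorkspace().getQueryManager();
Query query = qm.createQuery(
        "SELECT resource.*, ace.* "
        + "FROM [nt:hierarchyNode] AS resource "
        + "INNER JOIN [rep:ACL] AS acl ON ISCHILDNODE(acl, resource) "
        + "INNER JOIN [rep:ACE] AS ace ON ISCHILDNODE(ace, acl) "
        + "WHERE ace.[rep:principalName] = 'username'",
        Query.JCR_SQL2);
RowIterator rows = query.execute().getRows();
while (rows.hasNext()) {
    // Take the node from the 'resource' selector of each joined row.
    Node node = rows.nextRow().getNode("resource");
    System.out.println(node.getPath());
}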

Related

StreamingFileSink doesn't work sometimes when trying to write to S3

I am trying to write to an S3 sink.
private static StreamingFileSink<String> createS3SinkFromStaticConfig(
        final Map<String, Properties> applicationProperties
) {
    Properties sinkProperties = applicationProperties.get(SINK_PROPERTIES);
    String s3SinkPath = sinkProperties.getProperty(SINK_S3_PATH_KEY);
    return StreamingFileSink
            .forRowFormat(
                    new Path(s3SinkPath),
                    new SimpleStringEncoder<String>(StandardCharsets.UTF_8.toString())
            )
            .build();
}
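For context: StreamingFileSink only finalizes (commits) its in-progress part files when a checkpoint completes, so checkpointing must be enabled on the execution environment for output to become visible in S3. A one-line sketch, with an illustrative interval:

env.enableCheckpointing(60_000); // finalize part files on each checkpoint, here every 60s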
The following code works, and I can see the results in S3:
input.map(value -> { // Parse the JSON
    JsonNode jsonNode = jsonParser.readValue(value, JsonNode.class);
    return new Tuple2<>(jsonNode.get("ticker").asText(), jsonNode.get("price").asDouble());
}).returns(Types.TUPLE(Types.STRING, Types.DOUBLE))
        .keyBy(0) // Logically partition the stream per stock symbol
        .timeWindow(Time.seconds(10), Time.seconds(5)) // Sliding window definition
        .min(1) // Calculate minimum price per stock over the window
        .setParallelism(3) // Set parallelism for the min operator
        .map(value -> value.f0 + ": ----- " + value.f1.toString() + "\n")
        .addSink(createS3SinkFromStaticConfig(applicationProperties));
But the following doesn't write anything to S3.
KeyedStream<EnrichedMetric, EnrichedMetricKey> input = env.addSource(new EnrichedMetricSource())
        .assignTimestampsAndWatermarks(
                WatermarkStrategy.<EnrichedMetric>forMonotonousTimestamps()
                        .withTimestampAssigner((event, l) -> event.getEventTime())
        ).keyBy(new EnrichedMetricKeySelector());

DataStream<String> statsStream = input
        .window(TumblingEventTimeWindows.of(Time.seconds(5)))
        .process(new PValueStatisticsWindowFunction());

statsStream.addSink(createS3SinkFromStaticConfig(applicationProperties));
PValueStatisticsWindowFunction is a ProcessWindowFunction, shown below:
@Override
public void process(EnrichedMetricKey enrichedMetricKey,
                    Context context,
                    Iterable<EnrichedMetric> in,
                    Collector<String> out) throws Exception {
    int count = 0;
    for (EnrichedMetric m : in) {
        count++; // count the elements in the window
    }
    out.collect("Count: " + count);
}
When I run the Flink app locally, statsStream.print() prints the results to log/flink-*-taskexecutor-*.out.
In the cluster, I can see that checkpointing is enabled and the checkpoint history in the Flink dashboard. I also made sure the S3 path is in the format s3a://<bucket>.
Not sure what I am missing here.

ConnectionSpecWrapper no longer present in recent releases

Why has the ActiveJDBC class ConnectionSpecWrapper disappeared in recent releases?
In the 3.0 (and also 2.3.2-j8) ActiveJDBC jar we have:
org/javalite/activejdbc/connection_config/ConnectionJndiConfig.class
org/javalite/activejdbc/connection_config/ConnectionConfig.class
org/javalite/activejdbc/connection_config/ConnectionJdbcConfig.class
org/javalite/activejdbc/connection_config/ConnectionDataSourceConfig.class
org/javalite/activejdbc/connection_config/DBConfiguration.class
In the 2.3 jar we have:
org/javalite/activejdbc/connection_config/ConnectionSpecWrapper.class
org/javalite/activejdbc/connection_config/DbConfiguration.class
org/javalite/activejdbc/connection_config/ConnectionJdbcSpec.class
org/javalite/activejdbc/connection_config/ConnectionSpec.class
org/javalite/activejdbc/connection_config/ConnectionDataSourceSpec.class
org/javalite/activejdbc/connection_config/ConnectionJndiSpec.class
I am using it like this, in a filter:
@Override
public void before() {
    if (Configuration.isTesting())
        return;
    List<ConnectionSpecWrapper> connectionWrappers = getConnectionWrappers();
    if (connectionWrappers.isEmpty()) {
        throw new InitException("There are no connection specs in '" + Configuration.getEnv() + "' environment");
    }
    for (ConnectionSpecWrapper connectionWrapper : connectionWrappers) {
        DB db = new DB(connectionWrapper.getDbName());
        db.open(connectionWrapper.getConnectionSpec());
        log.debug("Opened connection: " + connectionWrapper.getDbName() + " envname " + connectionWrapper.getEnvironment());
        if (manageTransaction) {
            db.openTransaction();
        }
    }
}

@Override
public void after() {
    if (Configuration.isTesting())
        return;
    List<ConnectionSpecWrapper> connectionWrappers = getConnectionWrappers();
    if (connectionWrappers != null && !connectionWrappers.isEmpty()) {
        for (ConnectionSpecWrapper connectionWrapper : connectionWrappers) {
            DB db = new DB(connectionWrapper.getDbName());
            if (manageTransaction) {
                db.commitTransaction();
            }
            db.close();
            log.debug("Closed connection: " + connectionWrapper.getDbName() + " envname " + connectionWrapper.getEnvironment());
        }
    }
}
I'm thinking of upgrading the Gazzetta dello Sport's fantasy football site, which has been live for something like 8 years and working really well. It is on Java 7 / ActiveWeb 1.10 / ActiveJDBC 1.4.9.
The "wrapper" classes have been renamed into "Spec" classes, as you rightly noticed. Generally these classes are not used. If you want to continue using them you can of course (rename accordingly). However, a better approach is to define your connections in a file:
https://javalite.io/database_configuration#property-file-configuration
and simply use https://javalite.io/controller_filters#dbconnectionfilter.
I'm assuming you wrote a custom controller filter and are using ActiveWeb.
Update:
Now that we have established you use ActiveWeb, consider removing your code and simply using a DBConnectionFilter; here is a perfect example: https://github.com/javalite/javalite-examples/blob/master/activeweb-simple/src/main/java/app/config/AppControllerConfig.java#L31
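For reference, a minimal sketch of that setup (the class and filter come from the linked example; the "default" database name and the transaction flag are illustrative):

public class AppControllerConfig extends AbstractControllerConfig {
    @Override
    public void init(AppContext context) {
        // Opens a connection before each request and closes it after;
        // the boolean flag also wraps the request in a transaction.
        add(new DBConnectionFilter("default", true));
    }
}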

Apache-ignite: Persistent Storage

My understanding of Ignite persistent storage is that the data is not only kept in memory, but also written to disk.
When the node is restarted, it should read the data from disk back into memory.
So I am using this example to test it out, but I updated it a little because I don't want to use XML.
This is my slightly updated code.
public class PersistentIgniteExpr {
    /** Organizations cache name. */
    private static final String ORG_CACHE = "CacheQueryExample_Organizations";

    /** */
    private static final boolean UPDATE = true;

    public void test(String nodeId) {
        // Apache Ignite node configuration.
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Ignite persistence configuration.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Enabling the persistence.
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        // Applying settings.
        cfg.setDataStorageConfiguration(storageCfg);

        List<String> addresses = new ArrayList<>();
        addresses.add("127.0.0.1:47500..47502");
        TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
        tcpDiscoverySpi.setIpFinder(new TcpDiscoveryMulticastIpFinder().setAddresses(addresses));
        cfg.setDiscoverySpi(tcpDiscoverySpi);

        try (Ignite ignite = Ignition.getOrStart(cfg.setIgniteInstanceName(nodeId))) {
            // Activate the cluster. Required when the persistent store is enabled, because you might
            // need to wait while all the nodes that store a subset of data on disk join the cluster.
            ignite.active(true);

            CacheConfiguration<Long, Organization> cacheCfg = new CacheConfiguration<>(ORG_CACHE);
            cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
            cacheCfg.setBackups(1);
            cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
            cacheCfg.setIndexedTypes(Long.class, Organization.class);

            IgniteCache<Long, Organization> cache = ignite.getOrCreateCache(cacheCfg);

            if (UPDATE) {
                System.out.println("Populating the cache...");
                try (IgniteDataStreamer<Long, Organization> streamer = ignite.dataStreamer(ORG_CACHE)) {
                    streamer.allowOverwrite(true);
                    for (long i = 0; i < 100_000; i++) {
                        streamer.addData(i, new Organization(i, "organization-" + i));
                        if (i > 0 && i % 10_000 == 0)
                            System.out.println("Done: " + i);
                    }
                }
            }

            // Run SQL without explicitly calling loadCache().
            QueryCursor<List<?>> cur = cache.query(
                    new SqlFieldsQuery("select id, name from Organization where name like ?")
                            .setArgs("organization-54321"));
            System.out.println("SQL Result: " + cur.getAll());

            // Run get() without explicitly calling loadCache().
            Organization org = cache.get(54321L);
            System.out.println("GET Result: " + org);
        }
    }
}
The first time I run it, it works as intended.
After running it once, I assume the data has been written to disk, since the code enables persistent storage.
For the second run, I commented out this part:
if (UPDATE) {
    System.out.println("Populating the cache...");
    try (IgniteDataStreamer<Long, Organization> streamer = ignite.dataStreamer(ORG_CACHE)) {
        streamer.allowOverwrite(true);
        for (long i = 0; i < 100_000; i++) {
            streamer.addData(i, new Organization(i, "organization-" + i));
            if (i > 0 && i % 10_000 == 0)
                System.out.println("Done: " + i);
        }
    }
}
That is the part where the data is written. When the SQL query is executed, it returns null. Does that mean the data is not written to disk?
Another question: I am not very clear about TcpDiscoverySpi. Can someone explain it as well?
Thanks in advance.
Do you have any exceptions at node startup?
Very probably you don't have the IGNITE_HOME environment variable configured, so the work directory for persistence is chosen differently each time you run a node.
You can either set the IGNITE_HOME environment variable or add a line of code to set the work directory explicitly: cfg.setWorkDirectory("C:\\workDirectory");
TcpDiscoverySpi provides a way to discover remote nodes in a grid, so a starting node can join the cluster. It is better to use TcpDiscoveryVmIpFinder if you know the list of IPs; TcpDiscoveryMulticastIpFinder broadcasts UDP messages to the network to discover other nodes and does not require a list of IPs at all.
Please see https://apacheignite.readme.io/docs/cluster-config for more details.
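A minimal sketch of the suggested change (the work directory path is a placeholder; the discovery addresses mirror the ones in the question):

IgniteConfiguration cfg = new IgniteConfiguration();

// Pin the work directory so the persistence files are found again on restart.
cfg.setWorkDirectory("C:\\workDirectory");

// Static IP finder: joins via the known addresses, no multicast needed.
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47502"));
cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));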

Apache Shiro login failed using JDBC Realm

I am trying to connect to an Oracle DB.
I want to retrieve the list of passwords from the database using the authentication query. Here is my sample shiro.ini file:
# password matcher
passwordMatcher = org.apache.shiro.authc.credential.PasswordMatcher
passwordService = org.apache.shiro.authc.credential.DefaultPasswordService
passwordMatcher.passwordService = $passwordService
# datasource
ds = oracle.jdbc.pool.OracleDataSource
ds.URL = jdbc:oracle:thin:@matrix-oracle11g:1521:dev11g
ds.user = cit1am
ds.password = cit1
jdbcRealm = org.apache.shiro.realm.jdbc.JdbcRealm
jdbcRealm.permissionsLookupEnabled = true
jdbcRealm.authenticationQuery = SELECT USR_PSWD FROM USR
jdbcRealm.credentialsMatcher = $passwordMatcher
jdbcRealm.dataSource = $ds
securityManager.realms = $jdbcRealm
[users]
[roles]
[urls]
Sample code snippet of login:
public class Quickstart {
private static final transient Logger log = LoggerFactory.getLogger(Quickstart.class);
public static void main(String[] args) {
Factory<SecurityManager> factory = new IniSecurityManagerFactory("classpath:shiro.ini");
SecurityManager securityManager = factory.getInstance();
SecurityUtils.setSecurityManager(securityManager);
Subject currentUser = SecurityUtils.getSubject();
// Do some stuff with a Session (no need for a web or EJB container!!!)
Session session = currentUser.getSession();
session.setAttribute("someKey", "aValue");
String value = (String) session.getAttribute("someKey");
if (value.equals("aValue")) {
log.info("Retrieved the correct value! [" + value + "]");
}
try{
// let's login the current user so we can check against roles and permissions:
if (!currentUser.isAuthenticated()) {
UsernamePasswordToken token = new UsernamePasswordToken("cit1am", "cit1") ;
token.setRememberMe(true);
try {
currentUser.login(token); //problem occurs here
log.info("inside try block ==========>>" );
}
catch (UnknownAccountException uae) {
log.info("There is no user with username of " + token.getPrincipal());
}
I am getting the following error:
[main] ERROR org.apache.shiro.realm.jdbc.JdbcRealm - There was a SQL error while authenticating user [cit1am]
java.sql.SQLException: Invalid column index
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:199)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:263)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:271)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:445)
Please suggest what I am doing wrong.
After more debugging, I found the issue was in my code and the SQL query in the .ini file.
I changed the following in the .ini file:
jdbcRealm.authenticationQuery = SELECT USR_PSWD FROM USR where USR_NM = ?
(The original query had no parameter placeholder, but JdbcRealm binds the username as the single query parameter, which is what caused the "Invalid column index" error.)
I also commented out:
#cm = org.apache.shiro.authc.credential.Sha256CredentialsMatcher
#jdbcRealm.credentialsMatcher = $cm
and removed the configuration related to the password matcher.
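Putting the changes together, the realm section of the .ini would look roughly like this (a sketch of the working configuration described above):

# datasource
ds = oracle.jdbc.pool.OracleDataSource
ds.URL = jdbc:oracle:thin:@matrix-oracle11g:1521:dev11g
ds.user = cit1am
ds.password = cit1

jdbcRealm = org.apache.shiro.realm.jdbc.JdbcRealm
jdbcRealm.authenticationQuery = SELECT USR_PSWD FROM USR where USR_NM = ?
jdbcRealm.dataSource = $ds
securityManager.realms = $jdbcRealm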
I also removed the role and permission checks from the Java code.
As I have just started with Shiro, it's a bit difficult to understand the flow at first.
Hopefully this can help someone in the future.
Thanks

Adding roles to users in team area in RTC

I need to add users (the users are already present in the repository; I only need to add them) and roles from a CSV file to team areas. The project area and team area already exist. I could successfully add the users, but not the roles, from the CSV file.
The CSV file format is :
Project name,Team Area name,Members,roles
Project1,User_Role_TA,Alex,Team Member
Project2,TA2,David,Scrum Master
Below is the code for it. It successfully adds the users, and it currently assigns them roles taken from the project area, but I need to assign the roles from the CSV file. In the code below, if I could get the roles from the CSV file at the line "IRole[] availableRoles = clientProcess.getRoles(area, null);", I think it would resolve the issue. I am not getting any error, but it doesn't add the roles.
while ((row = CSVFileReader.readLine()) != null) {
    rowNumber++;
    st = new StringTokenizer(row, ",");
    while (st.hasMoreTokens()) {
        projectAreaList.add(st.nextToken());
        teamAreaList.add(st.nextToken());
        membersList.add(st.nextToken());
        roleList.add(st.nextToken());
    }
}
for (int i = 1; i < rowNumber; i++) {
    projectAreaName = projectAreaList.get(i);
    teamAreaName = teamAreaList.get(i);
    members = membersList.get(i);
    member_roles = roleList.get(i);
    URI uri = URI.create(projectAreaName.replaceAll(" ", "%20"));
    IProjectArea projectArea = (IProjectArea) processClient.findProcessArea(uri, null, null);
    if (projectArea == null) {
        System.out.println("Project Area not found");
    }
    if (!teamAreaName.equals("NULL")) {
        List<ITeamAreaHandle> teamlist = projectArea.getTeamAreas();
        ITeamAreaHandle newTAHandle = findTeamAreaByName(teamlist, teamAreaName, monitor);
        if (newTAHandle == null) {
            System.out.println("Team Area not found");
        } else {
            ITeamArea TA = (ITeamArea) teamRepository.itemManager().fetchCompleteItem(newTAHandle, ItemManager.DEFAULT, monitor);
            IRole role = getRole(projectArea);
            IContributor user = teamRepository.contributorManager().fetchContributorByUserId(members, monitor);
            /*role1 = getRole(area).getId();
            if (role1.equalsIgnoreCase(member_roles)) {
                user_role = getRole(area);
            }*/
            IProcessAreaWorkingCopy areaWc = (IProcessAreaWorkingCopy) service.getWorkingCopyManager().createPrivateWorkingCopy(TA);
            areaWc.getTeam().addContributorsSettingRoleCast(
                    new IContributor[] {user},
                    new IRole[] {role});
            areaWc.save(monitor);
        }
    }
}

public static IRole getRole(IProcessArea area) throws TeamRepositoryException {
    ITeamRepository repo = (ITeamRepository) area.getOrigin();
    IProcessItemService service = (IProcessItemService) repo
            .getClientLibrary(IProcessItemService.class);
    IClientProcess clientProcess = service.getClientProcess(area, null);
    IRole[] availableRoles = clientProcess.getRoles(area, null);
    for (int i = 0; i < availableRoles.length; i++) {
        return availableRoles[i]; // note: always returns the first available role
    }
    throw new IllegalArgumentException("Couldn't find role");
}
Some of the APIs you are trying to use are private in RTC 3.x.
See this thread for different options (a bit similar to your code):
ProjectAreaWorkingCopy workingCopy = (ProjectAreaWorkingCopy) manager.getWorkingCopy(project);
This class extends ProcessAreaWorkingCopy:
public class ProjectAreaWorkingCopy extends ProcessAreaWorkingCopy implements IProjectAreaWorkingCopy
In ProcessAreaWorkingCopy, setRoleCast retrieves the team and sets the role.
One can set the role at the team level via:
team.setRoleCast(contributor, roleCast);
or:
projWc.getTeam().addContributorsSettingRoleCast(new IContributor[] {contributor}, roles);
The OP Kaushambi Suyal reports:
Created a method as mentioned in the thread, with a few changes, and it worked.
Also, we need to pass the process area here and not the project area, because I am trying to add roles to users in a team area, not a project area.
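For illustration, a hypothetical variant of the question's getRole helper that matches the role ID against the value read from the CSV instead of returning the first available role. It uses only calls already shown in the question; the method name is an assumption:

// Hypothetical helper: look up the role whose ID matches the CSV value.
public static IRole findRoleById(IProcessArea area, String roleId) throws TeamRepositoryException {
    ITeamRepository repo = (ITeamRepository) area.getOrigin();
    IProcessItemService service = (IProcessItemService) repo.getClientLibrary(IProcessItemService.class);
    IClientProcess clientProcess = service.getClientProcess(area, null);
    for (IRole role : clientProcess.getRoles(area, null)) {
        if (role.getId().equalsIgnoreCase(roleId)) {
            return role;
        }
    }
    throw new IllegalArgumentException("Couldn't find role: " + roleId);
}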