Unable to delete documents in a Documentum application using DFC

I have written the following code based on the approach given in the EMC DFC 7.2 Development Guide. With this code I'm able to delete only 50 documents even though there are more records. Before deletion, I take a dump of each object ID. I'm not sure if there is any limit with IDfDeleteOperation. Since this deletes only 50 documents, I tried the DQL delete command, but that was also limited to 50 documents. I tried the destroy() and destroyAllVersions() methods that the document has, and that didn't work for me either. I have written everything in the main method.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.*;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfLoginInfo;
import com.documentum.operations.IDfCancelCheckoutNode;
import com.documentum.operations.IDfCancelCheckoutOperation;
import com.documentum.operations.IDfDeleteNode;
import com.documentum.operations.IDfDeleteOperation;
import java.io.BufferedWriter;
import java.io.FileWriter;
public class DeleteDoCAll {
public static void main(String[] args) throws DfException {
System.out.println("Started...");
IDfClientX clientX = new DfClientX();
IDfClient dfClient = clientX.getLocalClient();
IDfSessionManager sessionManager = dfClient.newSessionManager();
IDfLoginInfo loginInfo = clientX.getLoginInfo();
loginInfo.setUser("username");
loginInfo.setPassword("password");
sessionManager.setIdentity("repo", loginInfo);
IDfSession dfSession = sessionManager.getSession("repo");
System.out.println(dfSession);
IDfDeleteOperation delo = clientX.getDeleteOperation();
IDfCancelCheckoutOperation cco = clientX.getCancelCheckoutOperation();
try {
String dql = "select r_object_id from my_report where folder('/Home', descend);
IDfQuery idfquery = new DfQuery();
IDfCollection collection1 = null;
try {
idfquery.setDQL(dql);
collection1 = idfquery.execute(dfSession, IDfQuery.DF_READ_QUERY);
int i = 1;
while(collection1 != null && collection1.next()) {
String r_object_id = collection1.getString("r_object_id");
StringBuilder attributes = new StringBuilder();
IDfDocument iDfDocument = (IDfDocument)dfSession.getObject(new DfId(r_object_id));
attributes.append(iDfDocument.dump());
BufferedWriter writer = new BufferedWriter(new FileWriter("path to file", true));
writer.write(attributes.toString());
writer.close();
cco.setKeepLocalFile(true);
IDfCancelCheckoutNode cnode;
if(iDfDocument.isCheckedOut()) {
if(iDfDocument.isVirtualDocument()) {
IDfVirtualDocument vdoc = iDfDocument.asVirtualDocument("CURRENT", false);
cnode = (IDfCancelCheckoutNode)cco.add(iDfDocument);
} else {
cnode = (IDfCancelCheckoutNode)cco.add(iDfDocument);
}
if(cnode == null) {
System.out.println("Node is null");
}
if(!cco.execute()) {
System.out.println("Cancel check out operation failed");
} else {
System.out.println("Cancelled check out for " + r_object_id);
}
}
delo.setVersionDeletionPolicy(IDfDeleteOperation.ALL_VERSIONS);
IDfDeleteNode node = (IDfDeleteNode)delo.add(iDfDocument);
if(node == null) {
System.out.println("Node is null");
System.out.println(i);
i += 1;
}
if(delo.execute()) {
System.out.println("Delete operation done");
System.out.println(i);
i += 1;
} else {
System.out.println("Delete operation failed");
System.out.println(i);
i += 1;
}
}
} finally {
if(collection1 != null) {
collection1.close();
}
}
} catch(Exception e) {
e.printStackTrace();
} finally {
sessionManager.release(dfSession);
}
}
}
I don't know where I'm making a mistake; every time I try, the program stops at the 50th iteration. Can you please help me delete all documents in the proper way? Thanks a lot!

First select all document IDs into a List<IDfId>, for example, and close the collection. Don't perform other expensive operations while the collection is open, because you are unnecessarily blocking it.
This is why it stopped at 50 documents: you had one main collection open, and each execution of the delete operation opened another collection, which probably hit some limit. So, as I said, it is better to consume the collection first and then work further with that data:
List<IDfId> ids = new ArrayList<>();
try {
    query.setDQL("SELECT r_object_id FROM my_report WHERE FOLDER('/Home', DESCEND)");
    collection = query.execute(session, IDfQuery.DF_READ_QUERY);
    while (collection.next()) {
        ids.add(collection.getId("r_object_id"));
    }
} finally {
    if (collection != null) {
        collection.close();
    }
}
After that you can iterate through the list and perform whatever actions you need on each document. But don't execute the delete operation in each iteration; that is inefficient. Instead, add all documents to one operation and execute it once at the end.
IDfDeleteOperation deleteOperation = clientX.getDeleteOperation();
deleteOperation.setVersionDeletionPolicy(IDfDeleteOperation.ALL_VERSIONS);
for (IDfId id : ids) {
    IDfDocument document = (IDfDocument) session.getObject(id);
    ...
    deleteOperation.add(document);
}
deleteOperation.execute();
The same applies to the IDfCancelCheckoutOperation: collect all checked-out documents into one operation and execute it once, as sketched below.
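A minimal sketch of that, assuming the same session, clientX and ids list as above (the keepLocalFile setting is just carried over from the question):
IDfCancelCheckoutOperation cancelCheckout = clientX.getCancelCheckoutOperation();
cancelCheckout.setKeepLocalFile(true);
boolean anyCheckedOut = false;
for (IDfId id : ids) {
    IDfDocument document = (IDfDocument) session.getObject(id);
    if (document.isCheckedOut()) {
        cancelCheckout.add(document); // collect all checked-out documents first
        anyCheckedOut = true;
    }
}
// execute once for the whole batch
if (anyCheckedOut && !cancelCheckout.execute()) {
    System.out.println("Cancel checkout operation failed");
}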
And another thing: when you are using a FileWriter, call close() in a finally block or use try-with-resources like this:
try (BufferedWriter writer = new BufferedWriter(new FileWriter("file.path", true))) {
    writer.write(document.dump());
} catch (IOException e) {
    throw new UncheckedIOException(e);
}
Using a StringBuilder is a good idea, but create it only once at the beginning, append all attributes to it in each iteration, and write its content to the file once at the end rather than during each iteration - writing in every iteration is slow. A sketch of this pattern follows.
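A minimal sketch, assuming the same session and ids list as above and a hypothetical output path:
StringBuilder attributes = new StringBuilder();
for (IDfId id : ids) {
    IDfDocument document = (IDfDocument) session.getObject(id);
    attributes.append(document.dump()); // collect all dumps in memory
}
// write everything to the file once at the end
try (BufferedWriter writer = new BufferedWriter(new FileWriter("dump.txt", true))) {
    writer.write(attributes.toString());
} catch (IOException e) {
    throw new UncheckedIOException(e);
}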

You could just do this from inside your code:
delete my_report objects where folder('/Home', descend)
No need to fetch information you are only going to throw away again ;-) A sketch of running that DQL through DFC follows.
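A hedged sketch of executing it with IDfQuery, assuming an open dfSession as in the question; the DF_EXEC_QUERY constant and the objects_deleted result attribute are my assumptions, so check them against your DFC version:
IDfQuery query = new DfQuery();
query.setDQL("delete my_report objects where folder('/Home', descend)");
IDfCollection result = null;
try {
    // DF_EXEC_QUERY is meant for queries that change data
    result = query.execute(dfSession, IDfQuery.DF_EXEC_QUERY);
    while (result.next()) {
        System.out.println("Objects deleted: " + result.getString("objects_deleted"));
    }
} finally {
    if (result != null) {
        result.close();
    }
}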

You're probably hitting the result set limit of the DFC client.
Try adding these lines to dfc.properties and rerun your code to see if you can delete more than 50 rows, then adjust the values to your needs.
dfc.search.max_results = 100
dfc.search.max_results_per_source = 100

Related

Throwing an exception inside a Java 8 stream forEach

I am using a Java 8 stream and I cannot throw exceptions inside the forEach of the stream.
stream.forEach(m -> {
try {
if (isInitial) {
isInitial = false;
String outputName = new SimpleDateFormat(Constants.HMDBConstants.HMDB_SDF_FILE_NAME).format(new Date());
if (location.endsWith(Constants.LOCATION_SEPARATOR)) {
savedPath = location + outputName;
} else {
savedPath = location + Constants.LOCATION_SEPARATOR + outputName;
}
File output = new File(savedPath);
FileWriter fileWriter = null;
fileWriter = new FileWriter(output);
writer = new SDFWriter(fileWriter);
}
writer.write(m);
} catch (IOException e) {
throw new ChemIDException(e.getMessage(),e);
}
});
And this is my exception class:
public class ChemIDException extends Exception {
public ChemIDException(String message, Exception e) {
super(message, e);
}
}
I am using loggers to log the errors at an upper level, so I want to propagate the exception to the top. Thanks.
Try extending RuntimeException instead. The functional interface consumed by forEach does not declare your checked exception type, so you need something that can be thrown at runtime without being declared.
WARNING: THIS IS PROBABLY NOT A VERY GOOD IDEA
But it will probably work.
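A minimal sketch of that change, just switching the superclass of the exception class from the question:
// Unchecked variant: can be thrown from inside the forEach lambda
// without being declared by the functional interface.
public class ChemIDException extends RuntimeException {
    public ChemIDException(String message, Exception e) {
        super(message, e);
    }
}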
Why are you using forEach, a method designed to process every element, when all you want to do is process the first element? Instead of realizing that forEach is the wrong method for the job (or that there are more methods in the Stream API than forEach), you are kludging this with an isInitial flag.
Just consider:
Optional<String> o = stream.findFirst();
if(o.isPresent()) try {
String outputName = new SimpleDateFormat(Constants.HMDBConstants.HMDB_SDF_FILE_NAME)
.format(new Date());
if (location.endsWith(Constants.LOCATION_SEPARATOR)) {
savedPath = location + outputName;
} else {
savedPath = location + Constants.LOCATION_SEPARATOR + outputName;
}
File output = new File(savedPath);
FileWriter fileWriter = null;
fileWriter = new FileWriter(output);
writer = new SDFWriter(fileWriter);
writer.write(o.get());
} catch (IOException e) {
throw new ChemIDException(e.getMessage(),e);
}
which has no issues with exception handling. This example assumes that the Stream’s element type is String. Otherwise, you have to adapt the Optional<String> type.
If, however, your isInitial flag is supposed to change more than once during the stream processing, you are definitely using the wrong tool for your job. You should have read and understood the “Stateless behaviors” and “Side-effects” sections of the Stream API documentation, as well as the “Non-interference” section, before using Streams. Just converting loops to forEach invocations on a Stream doesn’t improve the code.

How to solve a FolderClosedIOException?

So I am new to Apache Camel. I know that most of this code is probably not the most efficient way to do this, but I have written code that uses Apache Camel to access my Gmail, grab the new messages and, if they have attachments, save the attachments in a specified directory. My route saves the body data as a file in that directory. Every time the DataHandler tries to use the getContent() method, whether it's saving a file or trying to print the body to System.out, I get either a FolderClosedIOException or a FolderClosedException. I have no clue how to fix it. The catch block reopens the folder, but it just closes again after getting another message.
import org.apache.camel.*;
import java.io.*;
import java.util.*;
import javax.activation.DataHandler;
import javax.mail.Folder;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import com.sun.mail.util.FolderClosedIOException;
public class Imap {
public static void main(String[] args) throws Exception {
CamelContext context = new DefaultCamelContext();
context.addRoutes(new RouteBuilder() {
public void configure() {
from("imaps://imap.gmail.com?username=********#gmail.com&password=******"
+ "&debugMode=false&closeFolder=false&mapMailMessage=false"
+ "&connectionTimeout=0").to("file:\\EMAIL");
}
});
Map<String,String> props = new HashMap<String,String>();
props.put("mail.imap.socketFactory.class","javax.net.ssl.SSLSocketFactory");
props.put("mail.imap.auth", "true");
props.put("mail.imap.host","imap.gmail.com");
props.put("mail.store.protocol", "imaps");
context.setProperties(props);
Folder inbox = null;
ConsumerTemplate template = context.createConsumerTemplate();
context.start();
while(true) {
try {
Exchange e = template.receive("imaps://imap.gmail.com?username=*********@gmail.com&password=***********", 60000);
if(e == null) break;
Message m = e.getIn();
Map<String, Object> s = m.getHeaders();
Iterator it = s.entrySet().iterator();
while(it.hasNext()) {
Map.Entry pairs = (Map.Entry)it.next();
System.out.println(pairs.getKey()+" === "+pairs.getValue()+"\n\n");
it.remove();
}
if(m.hasAttachments()) {
Map<String,DataHandler> att = m.getAttachments();
for(String s1 : att.keySet()) {
DataHandler dh = att.get(s1);
String filename = dh.getName();
ByteArrayOutputStream o = new ByteArrayOutputStream();
dh.writeTo(o);
byte[] by = o.toByteArray();
FileOutputStream out = new FileOutputStream("C:/EMAIL/"+filename);
out.write(by);
out.flush();
out.close();
}
}
} catch(FolderClosedIOException ex) {
inbox = ex.getFolder();
inbox.open(Folder.READ_ONLY);
}
}
context.stop();
}
}
Please, somebody tell me what's wrong!
The error occurs here:
dh.writeTo(o);
We were solving a similar problem in akka-camel.
The solution, I believe, was to use manual acknowledgement and send the acknowledgement only after we were done with the message.

How to verify whether a link read from a file is present on a webpage or not?

I am new at automation. I have to write code as follows:
I have to read around 10 URLs from a file and store them in a hashtable. Then I need to read the URLs from the hashtable one by one, and while iterating over each URL I also need to read another file containing 3 URLs and search for them on the webpage. If one is present, I need to click that link.
I have written the following code, but I am not getting the logic for checking whether a link from the file is present on the webpage or not.
Please check my code and help me solve/improve it.
Main test script
package com.samaritan.automation;
import java.util.Hashtable;
import java.util.Set;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
public class FirstScript {
WebDriver driver = new FirefoxDriver();
String data;
CommonControllers commonControll = null;
Hashtable<String, String> recruiters = null;
@Test
public void script() throws Exception {
CommonControllers commonControll = new CommonControllers();
recruiters = new Hashtable<String,String>();
recruiters = commonControll.readDataFromFile("D:/eRecruiters/_Recruiters.properties");
Set<String> keys = recruiters.keySet();
for(String key: keys){
/**HERE I NEED TO WRITE THE FUNCTION TO VERIFY WHETHER THE LINK READ FROM SECOND FILE IS PRESENT ON WEBPAGE OR NOT**/
}
}
}
And the function to read from the file into the hashtable:
public Hashtable<String, String> readDataFromFile(String fileName) {
try {
FileReader fr = new FileReader(fileName);
BufferedReader br = new BufferedReader(fr);
String strLine = null;
String []prop = null;
while((strLine = br.readLine()) != null) {
prop = strLine.split("\t");
recruiters.put(prop[0], prop[1]);
}
br.close();
fr.close();
}catch(Exception exception) {
System.out.println("Unable to read data from recruiter file: " + exception.getMessage());
}
return recruiters;
}
Please take a look! Thanks.
Priya, you can use:
if (isElementPresent(By.linkText(LinkTextFoundFromFile))) {
    // code when the link text is present there
} else {
    // code for not finding the link
}
The following method is generalized for any By locator, so you can use By.xpath, By.id, etc.:
private boolean isElementPresent(By by) {
    try {
        driver.findElement(by);
        return true;
    } catch (NoSuchElementException e) {
        return false;
    }
}
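As a hedged sketch of how this could be wired into the loop from the question (inside the @Test method, which already declares throws Exception), assuming the second file contains one link text per line and using a hypothetical path:
// Read the link texts from the second file, one per line.
List<String> linkTexts = new ArrayList<>();
try (BufferedReader reader = new BufferedReader(new FileReader("D:/eRecruiters/_Links.properties"))) {
    String line;
    while ((line = reader.readLine()) != null) {
        linkTexts.add(line.trim());
    }
}

for (String linkText : linkTexts) {
    if (isElementPresent(By.linkText(linkText))) {
        driver.findElement(By.linkText(linkText)).click(); // click the link if it is on the page
    }
}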

Apache Lucene 4.3.1 - index reader does not see the last indexed document

In my app I have documents representing my data for each category, and my application automatically indexes new and modified documents.
If I perform indexing for all documents in one category it works fine and retrieves correct results, but the problem is that if I modify a document or create a new one, the search will not retrieve it even when it matches my search query.
It usually keeps returning all docs except the last modified one.
Any help please?
I have this IndexWriter config :
private IndexWriter getIndexWriter() throws IOException {
Directory directory = FSDirectory.open(new File(filepath));
IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_43, IndexFactory.ANALYZER);
config.setRAMBufferSizeMB(350);
TieredMergePolicy tmp = new TieredMergePolicy();
tmp.setUseCompoundFile(false);
config.setMergePolicy(tmp);
ConcurrentMergeScheduler scheduler = (ConcurrentMergeScheduler) config.getMergeScheduler();
scheduler.setMaxThreadCount(2);
scheduler.setMaxMergeCount(20);
IndexWriter writer = new IndexWriter(directory, config);
writer.forceMerge(1);
return writer;
}
My Collector :
public void collect(int docNum) throws IOException {
try {
if ((getCount() == getMaxSearchLimit() + 1) && getMaxSearchResults() != null) {
setCounterExceededLimit(true);
return;
}
addDocKey();// method to add and render the matching docs by customize way
} catch(IOException exp) {
if (!getErrors().toArrayList(getApplication().getLocale()).contains(exp.getMessage())) {
getErrors().addError(exp.getMessage());
}
} catch (BusinessException bEx) {
if (!getErrors().containsError(bEx.getErrorNumber())) {
getErrors().addError(bEx);
}
} catch (CounterExceededLimitException counterEx) {
return;
}
}
@Override
public boolean acceptsDocsOutOfOrder() {
// TODO Auto-generated method stub
return true;
}
@Override
public void setNextReader(AtomicReaderContext context) throws IOException {
// TODO Auto-generated method stub
}
@Override
public void setScorer(Scorer scorer) throws IOException {
// TODO Auto-generated method stub
}
Actually I have this business logic to save my doc; if the doc is saved successfully, I add it to the index process.
public boolean saveDocument(CategoryDocument doc) {
boolean saved = false;
// code to save my doc
if(saved) {
//add this document to the index process
IndexManager.getInstance().addToIndex(this);
}
return saved;
}
Then my index manager creates a new thread to handle indexing this doc.
Here is my process to index my data document:
private void processDocument(IndexDocument indexDoc, DocKey docKey, boolean addToIndex) throws SearchException, BusinessException {
CategorySetting catSetting = docKey.getCategorySetting();
Integer catID = catSetting.getID();
IndexManager manager = IndexManager.getInstance();
IndexWriter writer = null;
try {
//Delete the lock file in case previous index operation failed to delete it
File lockFile = new File(filepath, IndexWriter.WRITE_LOCK_NAME);
if (lockFile != null && lockFile.exists()) {
lockFile.delete();
}
if(!manager.isGlobalIndexingProcess(catID)) {
writer = getIndexWriter();
} else {
writer = manager.getGlobalIndexWriter(catID);
}
writer.forceMerge(1);
removeDocument(docKey, writer);
if (addToIndex) {
writer.addDocument(indexDoc.getLuceneIndexDoc());
}
} catch(IOException exp) {
throw new SearchException(exp.getMessage(), true);
} finally {
if(!manager.isGlobalIndexingProcess(catID)) {
if (writer != null) {
try {
writer.close(true);
} catch(IOException ex) {
throw new SearchException(ex);
}
}
}
}
}
Use Lucene search to search for the word or phrase that you edited in the document and let us know whether you get the correct hits or not. If you don't get any hits, then you are probably not indexing the edited or newly added documents. A quick check for one likely cause is sketched below.
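As a hedged sketch of one possible cause (my assumption, not something confirmed in the question): in Lucene 4.3 a reader only sees changes committed before the reader was opened, so make sure the writer commits and that the searching side reopens its reader afterwards, assuming existing writer and reader variables:
// After indexing: make the changes visible to newly opened readers.
writer.commit();

// When searching: refresh the reader if the index has changed since it was opened.
DirectoryReader newReader = DirectoryReader.openIfChanged(reader);
if (newReader != null) {
    reader.close();
    reader = newReader;
}
IndexSearcher searcher = new IndexSearcher(reader);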

How to programmatically set the task outcome (task response) of a Nintex Flexi Task?

Is there any way to set a Nintex Flexi task completion through SharePoint's web services? We have tried updating the "WorkflowOutcome", "ApproverComments" and "Status" fields without success (actually the comments and status are successfully updated, but I can find no way of updating the WorkflowOutcome system field).
I can't use the Nintex Web service (ProcessTaskResponse) because it needs the task's assigned user's credentials (login, password, domain).
The ASP.NET page doesn't have that information; it has only the SharePoint administrator credentials.
One way is to delegate the task to the admin first, and then call ProcessTaskResponse, but it isn't efficient and is prone to errors.
In my tests so far, any update (UpdateListItems) to the WorkflowOutcome field automatically sets the Status field to "Completed" and the PercentComplete field to "1" (100%), ending the task (and continuing the flow), but with the wrong answer: always "Reject", no matter what I try to set it to.
Did you try this code? (A try-catch block with redirection does the trick:)
// set to actual outcome id here, for ex. from OutComePanel control
taskItem[Nintex.Workflow.Common.NWSharePointObjects.FieldDecision] = 0;
taskItem[Nintex.Workflow.Common.NWSharePointObjects.FieldComments] = " Some Comments";
taskItem.Update();
try
{
Nintex.Workflow.Utility.RedirectOrCloseDialog(HttpContext.Current, Web.Url);
}
catch
{
}
Here is my code to change the outcome of a Nintex Flexi task. My problem was permissions: I passed the system account (admin) token to the site, and that solved the problem.
var siteUrl = "...";
using (var tempSite = new SPSite(siteUrl))
{
var sysToken = tempSite.SystemAccount.UserToken;
using (var site = new SPSite(siteUrl, sysToken))
{
var web = site.OpenWeb();
...
var cancelled = "Cancelled";
task.Web.AllowUnsafeUpdates = true;
Hashtable ht = new Hashtable();
ht[SPBuiltInFieldId.TaskStatus] = SPResource.GetString(new CultureInfo((int)task.Web.Language, false), Strings.WorkflowStatusCompleted, new object[0]);
ht["Completed"] = true;
ht["PercentComplete"] = 1;
ht["Status"] = "Completed";
ht["WorkflowOutcome"] = cancelled;
ht["Decision"] = CommonHelper.GetFlexiTaskOutcomeId(task, cancelled);
ht["ApproverComments"] = "cancelled";
CommonHelper.AlterTask((task as SPListItem), ht, true, 5, 100);
task.Web.AllowUnsafeUpdates = false;
}
}
public static string GetFlexiTaskOutcomeId(Microsoft.SharePoint.Workflow.SPWorkflowTask task, string outcome)
{
if (task["MultiOutcomeTaskInfo"] == null)
{
return string.Empty;
}
string xmlOutcome = HttpUtility.HtmlDecode(task["MultiOutcomeTaskInfo"].ToString());
if (string.IsNullOrEmpty(xmlOutcome))
{
return string.Empty;
}
XmlDocument doc = new XmlDocument();
doc.LoadXml(xmlOutcome);
var node = doc.SelectSingleNode(string.Format("/MultiOutcomeResponseInfo/AvailableOutcomes/ConfiguredOutcome[@Name='{0}']", outcome));
return node.Attributes["Id"].Value;
}
public static bool AlterTask(SPListItem task, Hashtable htData, bool fSynchronous, int attempts, int milisecondsTimeout)
{
if ((int)task[SPBuiltInFieldId.WorkflowVersion] != 1)
{
SPList parentList = task.ParentList.ParentWeb.Lists[new Guid(task[SPBuiltInFieldId.WorkflowListId].ToString())];
SPListItem parentItem = parentList.Items.GetItemById((int)task[SPBuiltInFieldId.WorkflowItemId]);
for (int i = 0; i < attempts; i++)
{
SPWorkflow workflow = parentItem.Workflows[new Guid(task[SPBuiltInFieldId.WorkflowInstanceID].ToString())];
if (!workflow.IsLocked)
{
task[SPBuiltInFieldId.WorkflowVersion] = 1;
task.SystemUpdate();
break;
}
if (i != attempts - 1)
{
Thread.Sleep(milisecondsTimeout);
}
}
}
var result = SPWorkflowTask.AlterTask(task, htData, fSynchronous);
return result;
}