Adding a new revision for a document in Dropbox through the Android API

I want to add a new revision to a document (Test.doc) in Dropbox using the Android API. Can anyone share sample code or links? Here is what I tried:
FileInputStream inputStream = null;
try {
    // Download the file stream to read the current head revision.
    DropboxInputStream temp = mDBApi.getFileStream("/Test.doc", null);
    String revision = temp.getFileInfo().getMetadata().rev;
    Log.d("REVISION : ", revision);

    File file = new File("/sdcard0/renamed.doc");
    inputStream = new FileInputStream(file);

    // Pass the current rev as parentRev so the upload is recorded as a
    // new revision of the existing file.
    Entry newEntry = mDBApi.putFile("/Test.doc", inputStream, file.length(), revision,
            new ProgressListener() {
                @Override
                public void onProgress(long arg0, long arg1) {
                    Log.d("", "Uploading.. " + arg0 + ", Total : " + arg1);
                }
            });
} catch (Exception e) {
    System.out.println("Something went wrong: " + e);
} finally {
    if (inputStream != null) {
        try {
            inputStream.close();
        } catch (IOException e) {
        }
    }
}
A new revision is created the first time. When I execute it again, another new revision is not created.
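As a side note, you don't need to download the whole file just to learn its rev; a metadata call is enough. A minimal sketch against the v1 Android SDK (assuming mDBApi is the same connected DropboxAPI instance as above):

// Fetch only the metadata (no file download) to read the current revision.
// Sketch only; mDBApi is assumed to be the connected DropboxAPI instance
// from the snippet above.
DropboxAPI.Entry existing = mDBApi.metadata("/Test.doc", 1, null, false, null);
String parentRev = existing.rev;
Log.d("REVISION : ", parentRev);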

Related

Dropbox Java Api Upload File

How do I upload a file publicly and get a link? I am using the Dropbox Java core API. Here is my code:
public static void Yukle(File file) throws DbxException, IOException {
    // Open the file with try-with-resources so the stream is always closed.
    try (InputStream in = new FileInputStream(file)) {
        UploadBuilder uploadBuilder = clientV2.files().uploadBuilder("/" + file.getName());
        uploadBuilder.withMode(WriteMode.OVERWRITE);
        uploadBuilder.withClientModified(new Date());
        uploadBuilder.withAutorename(false);
        uploadBuilder.uploadAndFinish(in);
        System.out.println(clientV2.files());
    }
}
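To answer the "get a link" part: the v2 Java SDK exposes a sharing endpoint that can create a public link for an uploaded path. A minimal sketch, assuming the same clientV2 instance and that no shared link exists yet for that path:

// Create a public shared link for the uploaded file and print its URL.
// Sketch only; clientV2 is assumed to be the same DbxClientV2 used in
// Yukle above (SharedLinkMetadata is in com.dropbox.core.v2.sharing).
SharedLinkMetadata link = clientV2.sharing().createSharedLinkWithSettings("/" + file.getName());
System.out.println(link.getUrl());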
I use the following code to upload files to Dropbox:
public DropboxAPI.Entry uploadFile(final String fullPath, final InputStream is,
        final long length, final boolean replaceFile) {
    final DropboxAPI.Entry[] rev = new DropboxAPI.Entry[1];
    rev[0] = null;
    // Network I/O must stay off the Android main thread, hence the worker
    // thread; the immediate join() makes the call effectively synchronous.
    Thread t = new Thread(new Runnable() {
        public void run() {
            try {
                if (replaceFile) {
                    try {
                        mDBApi.delete(fullPath);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                    //! ReplaceFile is always true
                    rev[0] = mDBApi.putFile(fullPath, is, length, null, true, null);
                } else {
                    rev[0] = mDBApi.putFile(fullPath, is, length, null, null);
                }
            } catch (DropboxException e) {
                e.printStackTrace();
            }
        }
    });
    t.start();
    try {
        t.join();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return rev[0];
}

Downloading a file from Google cloud storage is corrupted randomly

I am trying to download BigQuery data through Google Cloud Storage. I am able to send data from BigQuery to GCS, but when I download the files from GCS for loading, they are randomly corrupted.
getObject.getMediaHttpDownloader().setDirectDownloadEnabled(true);
out = fs.create(pathDir, true);
getObject.executeMediaAndDownloadTo(out);
boolean match = ismd5HashValid(o.getMd5Hash(), pathDir);
and to check the MD5 checksum:
private boolean ismd5HashValid(String md5hash, String path) {
    org.apache.hadoop.fs.Path pathDir = new org.apache.hadoop.fs.Path(path);
    org.apache.hadoop.conf.Configuration conf = new org.apache.hadoop.conf.Configuration();
    InputStream is = null;
    try {
        FileSystem fs = FileSystem.get(conf);
        MessageDigest md = MessageDigest.getInstance("MD5");
        is = fs.open(pathDir);
        // Stream the file through the digest in 1 KB chunks.
        byte[] bytes = new byte[1024];
        int numBytes;
        while ((numBytes = is.read(bytes)) != -1) {
            md.update(bytes, 0, numBytes);
        }
        byte[] digest = md.digest();
        // GCS reports the MD5 as base64, so encode ours the same way.
        String result = new String(Base64.encodeBase64(digest));
        Log.info("Source file md5hash {} Downloaded file md5hash {}", md5hash, result);
        if (md5hash.equals(result)) {
            Log.info("md5hash check is valid");
            return true;
        }
    } catch (IOException e) {
        Log.warn(e.getMessage(), e);
    } catch (NoSuchAlgorithmException e) {
        Log.warn(e.getMessage(), e);
    } finally {
        IOUtils.closeQuietly(is);
    }
    return false;
}
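Since the corruption is intermittent, one pragmatic workaround is to retry the download until the checksum matches. A rough sketch, untested, reusing the names from the snippets above (getObject, fs, pathDir, o):

// Retry the GCS download a bounded number of times until the MD5 matches.
// Sketch only; getObject, fs, pathDir and o are the objects from the
// snippets above, and pathDir is assumed to be a Hadoop Path.
boolean match = false;
for (int attempt = 0; attempt < 3 && !match; attempt++) {
    OutputStream out = fs.create(pathDir, true); // overwrite any partial file
    try {
        getObject.executeMediaAndDownloadTo(out);
    } finally {
        out.close();
    }
    match = ismd5HashValid(o.getMd5Hash(), pathDir.toString());
}
if (!match) {
    throw new IOException("Download still corrupt after 3 attempts: " + pathDir);
}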

Apache Lucene 4.3.1 - index reader does not see the last indexed document

In my app I have documents representing my data for each category, and the application automatically indexes new and modified documents.
If I index all documents in one category at once, it works fine and retrieves correct results. The problem is that if I modify a document or create a new one, it is not retrieved even when it matches my search query; usually all docs are returned except the last modified one.
Any help, please?
I have this IndexWriter config:
private IndexWriter getIndexWriter() throws IOException {
    Directory directory = FSDirectory.open(new File(filepath));
    IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_43, IndexFactory.ANALYZER);
    config.setRAMBufferSizeMB(350);
    TieredMergePolicy tmp = new TieredMergePolicy();
    tmp.setUseCompoundFile(false);
    config.setMergePolicy(tmp);
    ConcurrentMergeScheduler scheduler = (ConcurrentMergeScheduler) config.getMergeScheduler();
    scheduler.setMaxThreadCount(2);
    scheduler.setMaxMergeCount(20);
    IndexWriter writer = new IndexWriter(directory, config);
    writer.forceMerge(1);
    return writer;
}
My collector:
public void collect(int docNum) throws IOException {
    try {
        if ((getCount() == getMaxSearchLimit() + 1) && getMaxSearchResults() != null) {
            setCounterExceededLimit(true);
            return;
        }
        addDocKey(); // method to add and render the matching docs in a customized way
    } catch (IOException exp) {
        if (!getErrors().toArrayList(getApplication().getLocale()).contains(exp.getMessage())) {
            getErrors().addError(exp.getMessage());
        }
    } catch (BusinessException bEx) {
        if (!getErrors().containsError(bEx.getErrorNumber())) {
            getErrors().addError(bEx);
        }
    } catch (CounterExceededLimitException counterEx) {
        return;
    }
}

@Override
public boolean acceptsDocsOutOfOrder() {
    return true;
}

@Override
public void setNextReader(AtomicReaderContext context) throws IOException {
}

@Override
public void setScorer(Scorer scorer) throws IOException {
}
Actually, I have this business logic to save my doc; if the doc is saved successfully, I add it to the index process.
public boolean saveDocument(CategoryDocument doc) {
    boolean saved = false;
    // code to save my doc
    if (saved) {
        // add this document to the index process
        IndexManager.getInstance().addToIndex(this);
    }
    return saved;
}
Then my index manager creates a new thread to handle indexing this doc. Here is my process to index my data document:
private void processDocument(IndexDocument indexDoc, DocKey docKey, boolean addToIndex)
        throws SearchException, BusinessException {
    CategorySetting catSetting = docKey.getCategorySetting();
    Integer catID = catSetting.getID();
    IndexManager manager = IndexManager.getInstance();
    IndexWriter writer = null;
    try {
        // Delete the lock file in case a previous index operation failed to delete it.
        File lockFile = new File(filepath, IndexWriter.WRITE_LOCK_NAME);
        if (lockFile.exists()) {
            lockFile.delete();
        }
        if (!manager.isGlobalIndexingProcess(catID)) {
            writer = getIndexWriter();
        } else {
            writer = manager.getGlobalIndexWriter(catID);
        }
        writer.forceMerge(1);
        removeDocument(docKey, writer);
        if (addToIndex) {
            writer.addDocument(indexDoc.getLuceneIndexDoc());
        }
    } catch (IOException exp) {
        throw new SearchException(exp.getMessage(), true);
    } finally {
        if (!manager.isGlobalIndexingProcess(catID)) {
            if (writer != null) {
                try {
                    writer.close(true);
                } catch (IOException ex) {
                    throw new SearchException(ex);
                }
            }
        }
    }
}
Use Lucene search to look for the word or phrase that you edited in the document, and let us know whether you get the correct hits. If you don't get any hits, you are probably not indexing the edited or newly added documents.
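If the documents are being indexed but still don't show up, a common cause in Lucene 4.x is searching with a stale IndexReader: a reader only sees the point-in-time snapshot of the index it was opened on, so it has to be reopened after the writer commits or closes. A minimal sketch, assuming you keep a long-lived DirectoryReader named reader between searches:

// Reopen the reader only if the index changed since it was opened (Lucene 4.x).
// Sketch only; assumes a long-lived DirectoryReader field named "reader".
DirectoryReader newReader = DirectoryReader.openIfChanged(reader);
if (newReader != null) {
    reader.close();
    reader = newReader;
}
IndexSearcher searcher = new IndexSearcher(reader);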

Save an image file in a specific directory in a JSF PrimeFaces project

I want to save a byte[] file into a specific directory. I get it from this method:
public void setUploadedPicture(UploadedFile uploadedPicture) {
    System.out.println("set : " + uploadedPicture.getFileName() + " size : " + uploadedPicture.getSize());
    this.uploadedPicture = uploadedPicture;
}
and I access the byte[] with:
uploadedPicture.getContents()
I tested this link but got no result. How do I save it into a specific directory, either inside my project or outside? Thank you.
*********EDIT**********
Here is the code, which works, but sometimes I get an error:
public void setUploadedPicture(UploadedFile uploadedPicture) {
    System.out.println("set : " + uploadedPicture.getFileName() + " size : " + uploadedPicture.getSize());
    this.uploadedPicture = uploadedPicture;
    InputStream inputStr = null;
    try {
        inputStr = uploadedPicture.getInputstream();
    } catch (IOException e) {
        e.printStackTrace();
    }
    // create destination File
    String destPath = "C:\\" + uploadedPicture.getFileName();
    File destFile = new File(destPath);
    // use org.apache.commons.io.FileUtils to copy the File
    try {
        FileUtils.copyInputStreamToFile(inputStr, destFile);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
public void handleFileUpload(FileUploadEvent event) {
    // get the uploaded file from the event
    UploadedFile uploadedFile = (UploadedFile) event.getFile();
    // create an InputStream from the uploaded file
    InputStream inputStr = null;
    try {
        inputStr = uploadedFile.getInputstream();
    } catch (IOException e) {
        // log error
    }
    // create the destination File
    String destPath = "your path here";
    File destFile = new File(destPath);
    // use org.apache.commons.io.FileUtils to copy the File
    try {
        FileUtils.copyInputStreamToFile(inputStr, destFile);
    } catch (IOException e) {
        // log error
    }
}
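For the "inside my project" case, the servlet context can resolve a folder under the deployed web app. A sketch (assuming a hypothetical /uploads folder exists in the web app; note that files written under the deploy directory are typically lost on redeploy, so an external directory is usually safer):

// Resolve a directory inside the deployed web app and copy the upload there.
// Sketch only; exception handling omitted, and "/uploads" is a hypothetical
// folder assumed to exist in the web app.
String uploadDir = FacesContext.getCurrentInstance()
        .getExternalContext().getRealPath("/uploads");
File destFile = new File(uploadDir, uploadedFile.getFileName());
FileUtils.copyInputStreamToFile(uploadedFile.getInputstream(), destFile);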

Managing trace files on SQL Server 2005

I need to manage the trace files for a database on SQL Server 2005 Express Edition. C2 audit logging is turned on for the database, and the files it creates are eating up a lot of space.
Can this be done from within Sql Server, or do I need to write a service to monitor these files and take the appropriate actions?
I found the [master].[sys].[traces] table with the trace file properties. Does anyone know the meaning of the fields in this table?
Here's what I came up with; it has been working pretty well as a console application:
static void Main(string[] args)
{
    try
    {
        Console.WriteLine("CcmLogManager v1.0");
        Console.WriteLine();

        // How long should we keep the files around (in months)? 12 is the PCI requirement.
        var months = Convert.ToInt32(ConfigurationManager.AppSettings.Get("RemoveMonths") ?? "12");

        var currentFilePath = GetCurrentAuditFilePath();
        Console.WriteLine("Path: {0}", new FileInfo(currentFilePath).DirectoryName);
        Console.WriteLine();
        Console.WriteLine("------- Removing Files --------------------");
        var fileInfo = new FileInfo(currentFilePath);
        if (fileInfo.DirectoryName != null)
        {
            var purgeBefore = DateTime.Now.AddMonths(-months);
            var files = Directory.GetFiles(fileInfo.DirectoryName, "audittrace*.trc.zip");
            foreach (var file in files)
            {
                try
                {
                    var fi = new FileInfo(file);
                    if (PurgeLogFile(fi, purgeBefore))
                    {
                        Console.WriteLine("Deleting: {0}", fi.Name);
                        try
                        {
                            fi.Delete();
                        }
                        catch (Exception ex)
                        {
                            Console.WriteLine(ex);
                        }
                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex);
                }
            }
        }
        Console.WriteLine("------- Files Removed ---------------------");
        Console.WriteLine();
        Console.WriteLine("------- Compressing Files -----------------");
        if (fileInfo.DirectoryName != null)
        {
            var files = Directory.GetFiles(fileInfo.DirectoryName, "audittrace*.trc");
            foreach (var file in files)
            {
                // Don't attempt to compress the current log file.
                if (file.ToLower() == fileInfo.FullName.ToLower())
                    continue;
                var zipFileName = file + ".zip";
                var fi = new FileInfo(file);
                var zipEntryName = fi.Name;
                Console.WriteLine("Zipping: \"{0}\"", fi.Name);
                try
                {
                    using (var fileStream = File.Create(zipFileName))
                    {
                        var zipFile = new ZipOutputStream(fileStream);
                        zipFile.SetLevel(9);
                        var zipEntry = new ZipEntry(zipEntryName);
                        zipFile.PutNextEntry(zipEntry);
                        using (var ostream = File.OpenRead(file))
                        {
                            int bytesRead;
                            var obuffer = new byte[2048];
                            while ((bytesRead = ostream.Read(obuffer, 0, 2048)) > 0)
                                zipFile.Write(obuffer, 0, bytesRead);
                        }
                        zipFile.Finish();
                        zipFile.Close();
                    }
                    fi.Delete();
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex);
                }
            }
        }
        Console.WriteLine("------- Files Compressed ------------------");
        Console.WriteLine();
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
    Console.WriteLine("Press any key...");
    Console.ReadKey();
}
public static bool PurgeLogFile(FileInfo fi, DateTime purgeBefore)
{
    try
    {
        var filename = fi.Name;
        if (filename.StartsWith("audittrace"))
        {
            // File names look like audittraceYYYYMMDD...; pull out the date part.
            filename = filename.Substring(10, 8);
            var year = Convert.ToInt32(filename.Substring(0, 4));
            var month = Convert.ToInt32(filename.Substring(4, 2));
            var day = Convert.ToInt32(filename.Substring(6, 2));
            var logDate = new DateTime(year, month, day);
            return logDate.Date <= purgeBefore.Date;
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
    return false;
}
public static string GetCurrentAuditFilePath()
{
    const string connStr = "Data Source=.\\SERVER;Persist Security Info=True;User ID=;Password=";
    var dt = new DataTable();
    var adapter = new SqlDataAdapter(
        "SELECT path FROM [master].[sys].[traces] WHERE path like '%audittrace%'", connStr);
    try
    {
        adapter.Fill(dt);
        if (dt.Rows.Count >= 1)
        {
            if (dt.Rows.Count > 1)
                Console.WriteLine("More than one audit trace file defined! Count: {0}", dt.Rows.Count);
            var path = dt.Rows[0]["path"].ToString();
            return path.StartsWith("\\\\?\\") ? path.Substring(4) : path;
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
    throw new Exception("No Audit Trace File in sys.traces!");
}
You can also set up SQL Trace to log to a SQL table. Then you can set up a SQL Agent task to auto-truncate records.
sys.traces has a record for every trace started on the server. Since SQL Server Express does not have Agent and cannot set up jobs, you'll need an external process or service to monitor these. You'll have to roll everything yourself (monitoring, archiving, trace retention policy, etc.). If you have C2 audit in place, I assume you also have policies that determine how long the audit trail has to be retained.