Axis2 temp files

I am using Axis2 1.6. On each request the client generates temp files, which is leading to a disk space problem.
Could someone point me to an article on how this problem can be addressed?
Regards,
Amber

I ran into this issue with Axis2 1.6.3. I'm using the Addressing module (org.apache.axis2:addressing:1.6.3:jar:classpath-module).
My entire shaded JAR was getting copied into the temp directory and not deleted if the Java process ever crashed, so disk usage grew unchecked.
My approach to solving this was to manually register the module with Axis2, instead of letting it automatically register, so it skips writing the temp JAR to disk.
I used the logic from Axis2's ModuleDeployer.deploy() and DeploymentEngine.addNewModule().
Old code, which creates temp files:
// The file-system factory deploys the modules it finds, copying the module JAR into a temp directory
ConfigurationContext context = ConfigurationContextFactory.createConfigurationContextFromFileSystem(null, null);
MyExampleStub myExampleStub = new MyExampleStub(context, mySoapUri);
myExampleStub._getServiceClient().engageModule("addressing");
New code, which doesn't create temp files:
ConfigurationContext context = ConfigurationContextFactory.createDefaultConfigurationContext();
AxisConfiguration axisConfiguration = context.getAxisConfiguration();
// Register the addressing module by hand, mirroring ModuleDeployer.deploy() and DeploymentEngine.addNewModule()
AxisModule addressing = new AxisModule("addressing");
addressing.setParent(axisConfiguration);
addressing.setModuleClassLoader(getClass().getClassLoader());
// module.xml sits at the root of the shaded JAR, so resolve it with an absolute resource path
InputStream moduleXmlInputStream = getClass().getResourceAsStream("/META-INF/module.xml");
new ModuleBuilder(moduleXmlInputStream, addressing, axisConfiguration).populateModule();
DeploymentEngine.addNewModule(addressing, axisConfiguration);
MyExampleStub myExampleStub = new MyExampleStub(context, mySoapUri);
myExampleStub._getServiceClient().engageModule("addressing");
Now I still see the expected <wsa:To>, <wsa:MessageID>, and <wsa:Action> elements in my outgoing request <Header>, but my temp directory doesn't contain any axis2-tmp-6160203768737879650.tmp directories or axis2-tmp-6160203768737879650.tmp.lck files.

This seems to be a bug that was fixed in version 1.7.0.

Related

How to use very old iText (under 0.99) to create bookmarks / outlines?

May I know how to use old iText (a very old version, under 0.99, package path = com.lowagie.xxx) to create bookmarks that jump to a destination inside the same PDF?
Like the API in the newer iText JAR:
PdfAction action = com.itextpdf.text.pdf.PdfAction.gotoLocalPage("destinationName", false);
We have found the code below to create a bookmark, but the old iText needs a filename (see outFileName in the code below), whereas what we want is a jump inside the same PDF (not to a remote PDF).
olineSignature = new PdfOutline(root, new PdfAction(outFileName, "Signature2TxtDestination"), "Signature2TxtOutline");
FYI, we don't know the page number in advance, so there is no way to use the old API PdfAction.gotoLocalPage(int, PdfDestination, PdfWriter).
Can anybody help me? Thanks. #Bruno Lowagie #itext :)
We are in the process of upgrading to the new iText (iText 5+), but for now we have a request to create bookmarks (using the old iText) so that others can retrieve the created bookmarks.
My memory can't go that far back, but local destinations are most probably not supported. Your only chance is to do an interim upgrade to the Jurassic 2.1.7, which should be more or less compatible with that Pleistocene 0.99.
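For what it's worth, a minimal sketch of how that might look after the interim upgrade to iText 2.1.7 (com.lowagie.text.pdf), using a named local destination so no page number is needed. The destination and outline names come from the question; the class name and output file are made up for the example:
import java.io.FileOutputStream;
import com.lowagie.text.Document;
import com.lowagie.text.Paragraph;
import com.lowagie.text.pdf.PdfAction;
import com.lowagie.text.pdf.PdfContentByte;
import com.lowagie.text.pdf.PdfDestination;
import com.lowagie.text.pdf.PdfOutline;
import com.lowagie.text.pdf.PdfWriter;

public class LocalBookmarkSketch {
    public static void main(String[] args) throws Exception {
        Document document = new Document();
        PdfWriter writer = PdfWriter.getInstance(document, new FileOutputStream("out.pdf"));
        document.open();
        document.add(new Paragraph("Signature section"));

        PdfContentByte cb = writer.getDirectContent();
        // Register a named destination on the current page; no page number required.
        cb.localDestination("Signature2TxtDestination", new PdfDestination(PdfDestination.FIT));

        // Build an outline entry that jumps to that destination inside the same PDF.
        PdfOutline root = cb.getRootOutline();
        new PdfOutline(root, PdfAction.gotoLocalPage("Signature2TxtDestination", false),
                "Signature2TxtOutline");

        document.close();
    }
}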

Hadoop DistributedCache caching files without absolute path?

I am in the process of migrating to YARN and it seems the behavior of the DistributedCache changed.
Previously, I would add some files to the cache as follows:
for (String file : args) {
    Path path = new Path(cache_root, file);
    URI uri = new URI(path.toUri().toString());
    DistributedCache.addCacheFile(uri, conf);
}
The path would typically look like
/some/path/to/my/file.txt
Which pre-exists on HDFS and would essentially end up in the DistributedCache as
/$DISTRO_CACHE/some/path/to/my/file.txt
I could symlink to it in my current working directory and use it with DistributedCache.getLocalCacheFiles().
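For reference, a rough sketch of that pre-YARN lookup, assuming the classic mapred API and a Configuration named conf (the logging is only illustrative):
// Sketch of the old behaviour described above.
Path[] localFiles = DistributedCache.getLocalCacheFiles(conf);
for (Path localFile : localFiles) {
    // Before YARN the localized copy kept the original directory structure,
    // e.g. .../some/path/to/my/file.txt, so it could be matched back to its HDFS path.
    System.out.println("Localized cache file: " + localFile);
}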
With YARN, it seems this file instead ends up in the cache as:
/$DISTRO_CACHE/file.txt
i.e., the 'path' part of the file URI got dropped and only the filename remains.
How does this work when different absolute paths end up with the same filename? Consider the following case:
DistributedCache.addCacheFile("some/path/to/file.txt", conf);
DistributedCache.addCacheFile("some/other/path/to/file.txt", conf);
Arguably someone could use fragments:
DistributedCache.addCacheFile("some/path/to/file.txt#file1", conf);
DistributedCache.addCacheFile("some/other/path/to/file.txt#file2", conf);
But this seems unnecessarily hard to manage. Imagine the scenario where those paths are command-line arguments: you somehow need to detect that the two filenames, coming from different absolute paths, would clash in the DistributedCache, then re-map them to fragments and propagate those fragments to the rest of the program.
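To illustrate, a rough sketch of that re-mapping on top of the original loop above (cache_root, args, and conf are the same as there; the naming scheme is purely made up):
for (String file : args) {
    Path path = new Path(cache_root, file);
    // Derive a predictable, unique symlink name from the full path,
    // e.g. /some/path/to/file.txt -> some_path_to_file.txt (hypothetical scheme).
    String fragment = path.toUri().getPath().replaceFirst("^/", "").replace('/', '_');
    URI uri = new URI(path.toUri().toString() + "#" + fragment);
    DistributedCache.addCacheFile(uri, conf);
}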
Is there an easier way to manage this?
Try adding the files to the Job
The issue is most likely in how you're actually configuring the job and then accessing the files in the Mapper.
When you're setting up the job, you're going to do something like:
job.addCacheFile(new Path("cache/file1.txt").toUri());
job.addCacheFile(new Path("cache/file2.txt").toUri());
Then, in your mapper code, the URIs are going to be stored in an array, which can be accessed like so:
URI file1Uri = context.getCacheFiles()[0];
URI file2Uri = context.getCacheFiles()[1];
Hope this helps.
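For completeness, a sketch of the mapper side under the same assumptions (new mapreduce API; the class name is made up, and the cache files are expected to be linked into the task's working directory under their file name or the URI fragment, if one was given):
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CacheAwareMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        for (URI cacheFile : context.getCacheFiles()) {
            // YARN localizes each cache file into the container's working directory,
            // linked under the URI fragment if one was given, else under the file name.
            String localName = cacheFile.getFragment() != null
                    ? cacheFile.getFragment()
                    : new Path(cacheFile.getPath()).getName();
            try (BufferedReader reader = new BufferedReader(new FileReader(localName))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    // use the side data here
                }
            }
        }
    }
}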

Why does GetBasicPropertiesAsync() sometimes throw an Exception?

In Windows 8, I'm trying to use GetBasicPropertiesAsync() to get the size of a newly created file. Sometimes, but not always (~25% of the time), this call throws an exception:
"Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))".
The file is created using DotNetZip. I'm adding thousands of files to the archive which takes a few minutes to run:
using (ZipFile zip = new ZipFile())
{
    zip.AddFile(...); // for thousands of files
    zip.Save(cr.ArchiveName);
}
var storageFile = await subFolder.GetFileAsync(cr.ArchiveName);
// storageFile is valid at this point
var basicProperties = await storageFile.GetBasicPropertiesAsync(); // BOOM!
A few apparently random things seem to decrease the likelihood of the exception:
Deleting an existing copy of cr.ArchiveName before the start of the loop.
Not viewing the directory using File Explorer
Weird, huh? It smells like it might be a bug related to file system tunneling, or maybe some internal caching that DotNetZip is performing, holding onto resources (maybe renaming the TEMP file) even after the ZipFile is disposed?
Trying to (unsuccessfully) answer my own question.
At first, I thought this was a known issue with DotNetZip holding onto file handles until the next garbage collection. I am using the SL/WP7 port of DotNetZip from http://slsharpziplib.codeplex.com/ which presumably doesn't include the bug fixed by this workitem:
http://dotnetzip.codeplex.com/workitem/12727
But, according to that theory, doing:
GC.Collect();
GC.WaitForPendingFinalizers();
should have provided a workaround, but it didn't.
Next I tried using the Handle tool, which didn't show any other activity on the failing StorageFile.
So for now, I'm still stumped.

FeatureReceiver SharePoint 2010 XDocument 503

My issue seems to be related to permissions, but I am not sure how to solve it.
In the FeatureActivated event of one of my features I am calling out to a class I created for managing webconfig entries using the SPWebConfigModification class. The class reads up an xml file that I have added to the mapped Layouts folder in the project.
When I deploy the .wsp to my SharePoint server everything gets installed fine, but when the FeatureActivated event runs it throws a 503 error when attempting to access the XML file. I am deploying the .wsp remotely using a PowerShell script, and I have PowerShell, the IIS app pool, and OWSTIMER.exe all running as the same domain administrative user.
I assumed the issue was that the FeatureActivated event code was being run within the scope of OWSTIMER.exe, so I changed the logon of the service to a domain user that has administrative access to the server to see if that would solve the problem, but no matter what I do I get the 503.
I have traced out the URL to the XML file and pasted it into IE, and I get the XML back from the server without issue once it has been copied.
Can anyone give me any idea where to look to figure out why the FeatureActivated event code can't seem to get to the XML file on the server?
Below is the code in my class that is called from the FeatureActivated event to read the XML.
_contentservice = ContentService;
WriteTraceMessage("Getting SPFeatureProperties", TraceSeverity.Medium, 5);
_siteurl = properties.Definition.Properties["SiteUrl"].Value;
_foldername = properties.Definition.Properties["FolderName"].Value;
_filename = properties.Definition.Properties["FileName"].Value;
_sitepath = properties.Definition.Properties["SitePath"].Value;
WriteTraceMessage("Loading xml from layouts for configuration keys", TraceSeverity.Medium, 6);
xdoc = new XDocument();
XmlUrlResolver resolver = new XmlUrlResolver();
XmlReaderSettings settings = new XmlReaderSettings();
StringBuilder sb = new StringBuilder();
sb.Append(_siteurl).Append("_layouts").Append("/").Append(_foldername).Append("/").Append(_filename);
WriteTraceMessage("Path to XML: " + sb.ToString(), TraceSeverity.Medium, 7);
WriteTraceMessage("Credentials for xml reader: " + CredentialCache.DefaultCredentials.ToString(), TraceSeverity.Medium, 8);
resolver.Credentials = CredentialCache.DefaultCredentials; // the issue might be here
settings.XmlResolver = resolver;
xdoc = XDocument.Load(XmlReader.Create(sb.ToString(), settings));
I finally punted on this issue because I discovered that while adding the -Force switch to the Enable-SPFeature command did use a different process to activate the feature when adding a solution, it did not work when updating a solution. Ultimately I just changed my XDocument.Load() to use a TextReader instead of a URI. The XML file will always be available when deploying the WSP because it is part of the package, so there is no reason to use IIS and a web request to load the XML.

An error 3013 thrown when writing a file in Adobe AIR

I'm trying to write/create a JSON file from an AIR app without showing a 'Save As' dialog box.
Here's the code I'm using:
var fileDetails:Object = CreativeMakerJSX.getFileDetails();
var fileName:String = String(fileDetails.data.filename);
var path:String = String(fileDetails.data.path);
var f:File = File.userDirectory.resolvePath( path );
var stream:FileStream = new FileStream();
stream.open(f, FileMode.WRITE );
stream.writeUTFBytes( jsonToExport );
stream.close();
The problem I'm having is that I get 'Error 3013: File or directory in use'. The directory/path is gathered from a Creative Suite extension I'm building; this path is the same as that of the FLA being developed in CS that the extension is being used with.
So I'm not sure if the problem is that there are already files in the directory I'm writing the JSON file to?
Do I need to add a timer in order to close the stream after a slight delay, giving it some time to write the file?
Can you set up some trace() commands? I would need to know what the values of the String variables are, and the f.url.
Can you read from the file that you are trying to write to, or does nothing work?
Where is CreativeMakerJSX.getFileDetails() coming from? Is it giving you data about a file that is in use?
And from Googling around, this seems like it may be a bug. Try setting up a listener for when you are finished, if you have had the file open previously.
I rewrote how the file was written and am no longer running into this issue.