Oracle udump trc file size issue - SQL

My project uses an Oracle DB hosted on a Unix machine. The issue is that the trace files generated at the udump location contain log output from my custom code as well (the custom code logs come from Java callouts loaded into the DB with loadjava).
Every time I use that module, the udump folder is flooded with three new .trc files containing the default Oracle traces as well as my custom code's logs.
I want to disable the logs generated by my code.
So far I have written a custom log4j.properties, loaded it with loadjava, and used it from my code. In that properties file I pointed the file and console appenders to a custom location on the Unix machine other than udump. But the custom logs still appear only in udump, and nothing shows up at the new location I configured in the properties file.
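For reference, the properties file I tried looks roughly like this (the appender name and the /app/logs path are placeholders for my real values):

log4j.rootLogger=ERROR, FILE
log4j.appender.FILE=org.apache.log4j.RollingFileAppender
log4j.appender.FILE.File=/app/logs/callouts.log
log4j.appender.FILE.MaxFileSize=5MB
log4j.appender.FILE.MaxBackupIndex=3
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n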
I also tried setting logging.trace=false in the logging.properties file of the Oracle JVM.
I have found a few SQL queries that can disable session tracing; they identify around 70 sessions. Since I only want to disable the Java logs, I would like to know whether it is possible to find the session(s) my Java logs use and disable tracing for just those.
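The kind of thing I have been trying looks like this (the <sid> and <serial#> placeholders come from the first query; I have not yet found a reliable filter for picking out the Java sessions):

-- list user sessions to try to pick out the Java callout sessions
SELECT sid, serial#, username, program, module
  FROM v$session
 WHERE type = 'USER';

-- turn SQL trace off for one session
EXEC dbms_system.set_sql_trace_in_session(<sid>, <serial#>, FALSE);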
I am using Oracle 9i and Java 1.4.
I need to stop the custom logs from being written to the udump location. The solution should also be generally implementable, as my application is deployed to multiple environments (test, stage, prod).
Any hint would be very helpful.

Related

"java.lang.NullPointerException" error in JMeter Non-Gui mode

When I try to execute a JMeter (versions 5.1.1 and 5.2.1) recorded script in non-GUI mode using distributed testing, it throws a "java.lang.NullPointerException" error while generating the HTML report. The JTL report is also created as an empty file without any data.
Note: this error occurs only when I place a CSV Data Set Config element in the test plan. When I remove or disable it, the HTML and JTL reports are generated without any error. I also cannot skip this CSV Data Set Config element during test execution.
Please let me know if there is any solution to overcome this issue.
Thanks in advance.
You're highlighting not the cause but rather a consequence; you should instead be paying attention to the Summariser output, which states summary = 0,
meaning that no Samplers were executed, so your test script execution on the slaves failed somewhere. First of all, I would recommend checking jmeter.log on the master and the jmeter-server.log files on the remote machines; most probably you will be able to figure out the root cause from there.
Quick checklist:
Make sure to use the same Java version on the master and the slaves
Make sure to use the same JMeter version (better the latest one) on the master and the slaves
If your test relies on JMeter Plugins - you need to install all the plugins used in the test onto all the slaves
If you define some properties in the user.properties file, you need to do the same on all the remote machines (or alternatively pass them via the -G command-line argument; see the example command after this list)
If you're using external 3rd-party files (CSV files, files to be uploaded, etc.) - you will need to manually copy them to the slave machines
Double check Remote hosts and RMI configuration to ensure that the slaves can communicate with the master in order to send Sample Results back to it. Also make sure that the relevant ports are open in Windows Firewall
More information: How to Perform Distributed Testing in JMeter
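For illustration, a non-GUI distributed run that also pushes a property to all the slaves looks roughly like this (the host names, the csv.data.path property, and the file paths are placeholders):

jmeter -n -t testplan.jmx -R slave1,slave2 -Gcsv.data.path=/data/users.csv -l results.jtl -e -o report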
The issue seems to be with the CSV file path. Make sure you are providing the correct path in the CSV Data Set Config. Normally this happens when JMeter is not able to read the data from that location.

Pdf2htmlEX common error "Cannot load font"

Running the pdf2htmlEX.exe Windows binary from the command prompt works as expected, while running the pdf2htmlEX Windows binary in a wrapper (.NET in my case) produced an error like the one below.
__tmp_font1.ttf is not in a known format (or uses features of that format fontforge does not support, or is so badly corrupted as to be unreadable)
Cannot load font C:\Users\admin\AppData\Local\Temp\pdf2htmlEX-5RLDCX/__tmp_font1.ttf
This is a pretty ambiguous error, and it appears to be common among users of the Windows binary version.
Apparently Lu Wang wasn't able to offer a solution for Windows users, as all related posts are marked 'insufficient info'. Unfortunately, the pdf2htmlEX project is also archived and no new comments can be added, so I'm adding this information here in the hope that it may help someone else in the future.
In my scenario, the library is called via an ASP.NET wrapper method using System.Diagnostics.Process to convert uploaded files into HTML versions. Pdf2htmlEX would work without issue from the Command Prompt and, for some reason, would also work perfectly in my development environment, but not in production (both environments are Windows Server 2012 R2).
My first assumption, which turned out to be correct, was that this was a permissions issue. Pdf2htmlEX uses FontForge internally to handle fonts, and one or both use the Windows Temp directory by default to store resource files used in creating the HTML and/or other files. I also believe, although I haven't confirmed it, that it may use the active user's %USERPROFILE%\AppData\Local\Temp folder.
When running test commands from the Command Prompt, you are operating under your own user context, and everything your user can do, Pdf2htmlEX can do, so everything works as expected.
In a server environment, the process operates under ApplicationPoolIdentity, a special IIS user type with limited permissions. This is where it failed for me: while I could see folders and files being created in the Windows Temp folder, Pdf2htmlEX couldn't open them to create the end files elsewhere.
Solution: (there may be other solutions for your individual case)
In my case, adding a new system user, adding that user to the Users group, and then setting the IIS worker process to run as that account resolved the issue. The reason, I believe, is that the Users group has read/write access to the Windows Temp directory, and potentially to other areas of the system that Pdf2htmlEX needs in order to complete.
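Alternatively (untested in my environment), it may be enough to grant the application pool identity itself modify rights on the Temp directory, along these lines (the pool name is a placeholder):

icacls C:\Windows\Temp /grant "IIS AppPool\MyAppPool":(OI)(CI)M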

What would cause SSIS to ignore Package Configuration Connections?

I have a very simple SSIS package that has two connections defined in the Connection Manager section: an MS Access data source and an MS SQL data source destination. All this package does is truncate a table in the SQL destination and import data from MS Access into the SQL table. This works as expected during development within VS2013.
Now, I have also enabled Package Configurations for the package and have a couple of XML configuration files (one for each connection) in a folder on the root of the C: drive. The configuration file connections differ based on the server where they reside, but the folder structure exists on both servers, so the package can execute against whichever server it is run from.
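For illustration, each of my configuration files looks roughly like this (the server, database, and connection names here are placeholders, not my real values):

<?xml version="1.0"?>
<DTSConfiguration>
  <Configuration ConfiguredType="Property" Path="\Package.Connections[SQLDestination].Properties[ConnectionString]" ValueType="String">
    <ConfiguredValue>Data Source=QASERVER\INST1;Initial Catalog=TargetDb;Provider=SQLNCLI11.1;Integrated Security=SSPI;</ConfiguredValue>
  </Configuration>
</DTSConfiguration>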
I've checked the box to enable Package Configurations and deployed the package to two different servers, one for Development and the other for QA. When I execute the package via SSMS Integration Services package execution on my Development server, the package uses the Development table. But when I execute the same package in my QA environment, it also uses the Development table.
Since the Development connection is the one embedded in the package via the Connection Manager, it appears (presumably, anyway) that the package is using the embedded connection and ignoring the configuration files.
I have also explicitly added the path to the configuration file within the Execute Package Utility in the Configurations section to see if it made any difference, but the results are the same: the configuration file is not acknowledged. So it again appears that the package is using the embedded connections defined in the Connection Managers.
I suppose I "may" be able to remove the connections from the package in the Connection Managers section, turn off validation during design time, and then redeploy in an effort to force the package to use the config files, but that seems like a hack at best, provided it would even work.
Not that I think it should make a difference, but to provide more detail, here is a bit more about my server configuration:
Development - SQL 2014 [ServerName]
Quality Assurance - SQL 2014 [ServerName]\[InstanceName]
I don't recall ever having this issue before, hence my reason for posting.
OK, since I am working against a deadline, I was hoping to get an answer sooner rather than later. Since that wasn't the case, and because I've seen variations of this question before without a definitive answer (at least for this scenario), I performed some tests and am posting the results for others who may need this information.
The following conditions will cause Configuration Files to be ignored even when Package Configurations are enabled in an SSIS package. These findings are based on actual tests and confirmed for SQL 2014, although prior versions may also be affected.
Disclaimer: these tests focused on Configuration Files as they pertain to actual server connections (e.g. connection strings), not other variables, although it's conceivable that any other values within the Configuration File would be affected in the same way.
Execution of the package from within SSMS while connected to the Integration Services component and selecting Run Package. The noted behavior is that whatever connection value was acquired prior to deployment to the server is the one that will be used, irrespective of the Configuration Files.
Note: this holds true even if configurations are added in the Configurations section prior to execution. Although there is mention that the configurations are not imported and cannot be edited, the fact is they were simply not used during testing.
If a SQL Agent job step is of type SQL Server Integration Services Package and no Configuration File references are actually added to the Configurations tab, the job will execute under whatever values were used during the last build within BIDS prior to deployment (embedded values).
If multiple configuration files are used by the package but some are omitted from the Configurations tab of the job, the job will use the Configuration Files designated but will default to the last values used in development (embedded values) for those not present in the context of the job.
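For comparison, a configuration file can also be passed explicitly when running a package with dtexec (the paths here are placeholders); I did not retest every scenario above this way:

dtexec /File "C:\Packages\AccessImport.dtsx" /ConfigFile "C:\SSISConfigs\Connections.dtsConfig"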
Some of these behaviors are not very obvious, and I'd imagine they could be a frustrating puzzle for someone who, following most online tutorials on using Package Configuration files, expected more straightforward results.
Identifying the root cause was a time-consuming testing exercise for me, and although I'm not an expert, I'm certainly far from a novice with SSIS.
At any rate, I hope this saves someone else hours of work and investigation.

Python and WSGI - Where is the default output folder? (CentOS/Apache)

I am running Python under WSGI on an Apache server on CentOS 6. The Python script uses PyNGL, a wrapper for the NCAR graphics library; the purpose of this library is to generate graphics from supplied data.
I am attempting to use my Python script as a web service by hooking it up to web.py, but it has an entry point for direct execution as well.
Here is the weird thing:
When I run the script directly, it works as intended and produces an output image in the script's directory. However, when I attempt to invoke it through the web.py controller (with the exact same parameters), it fails.
My apache error log contains this:
warning:GKS:GCLRWK: -- cairo driver error: error opening output file
I'm guessing that this is probably a permissions problem, but I haven't the slightest idea where it's trying to write its output.
Edit: I think I have confirmed that it is indeed a permissions error. I attempted to create a file using a relative path and got a similar error:
<type 'exceptions.IOError'> at /plot
[Errno 13] Permission denied: 'Output.txt'
This error refers to this line here:
with open("Output.txt", "w") as text_file:
text_file.write(str(self.__dict__))
Now of course I can specify an absolute path for that text file, but not for the graphical output from PyNGL. Is there a way to determine where it is trying to output, or to change the default output directory?
Usually your application would be running with the current working directory as '/'. The Apache user will not be able to write to that directory.
In any web application you should in general never rely on it being run in a specific directory, as different web servers behave differently as to what the current working directory will be. If you assume it always runs in a specific directory, your application will be inherently unportable. Changing the working directory of an application to get around this is also in general bad practice, because in a hosting mechanism that allows multiple applications to run in the same process, they would all interfere with each other if they each tried to set their own working directory.
What you should do is always use absolute paths when reading and writing files, and never rely on relative paths. Why do you say you can't use absolute paths?
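A minimal sketch of the idea, assuming a hypothetical 'output' directory alongside your module that the Apache user can write to:

import os

# Anchor everything to the directory containing this module,
# independent of whatever the process's working directory is.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

# Hypothetical output directory; the Apache user needs write access to it.
OUTPUT_DIR = os.path.join(BASE_DIR, "output")
if not os.path.isdir(OUTPUT_DIR):
    os.makedirs(OUTPUT_DIR)

def output_path(filename):
    """Return an absolute path for an output file."""
    return os.path.join(OUTPUT_DIR, filename)

with open(output_path("Output.txt"), "w") as text_file:
    text_file.write("some data")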
Also be aware that your application will run as a special user which would not normally have access to directories to create files in. You would therefore need to open up access for the Apache user. Best practice, though, is to limit what the Apache user can write to.
Now, since you are using mod_wsgi, one viable option is to make sure you are using mod_wsgi daemon mode, and when using the WSGIDaemonProcess directive set the 'home' option to override the current working directory for the single WSGI application delegated to that process. You can also set the 'user' and 'group' options to have the process run as a different user that does have access to the directory.
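For example, something along these lines in the Apache configuration (the process group name, user/group, and paths are placeholders, and the 'home' option needs a reasonably recent mod_wsgi):

WSGIDaemonProcess pynglapp user=pyngl group=pyngl home=/srv/pynglapp/output
WSGIProcessGroup pynglapp
WSGIScriptAlias / /srv/pynglapp/app.wsgi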

Accessing a resource file from a filesystem plugin on SymbianOS

I cannot use the Resource File API from within a file system plugin due to a PlatSec issue:
*PlatSec* ERROR - Capability check failed - Can't load filesystemplugin.PXT because it links to bafl.dll which has the following capabilities missing: TCB
My understanding of the issue is that:
File system plugins are DLLs executed within the context of the file server process. Therefore all file system plugins must have the TCB PlatSec capability, which in turn means they cannot link against a DLL that is not in the TCB.
Is there a way around this (without resorting to a text file or an intermediate server)? I suspect not - but it would be good to get a definitive answer.
The Symbian file server has the following capabilities:
TCB ProtServ DiskAdmin AllFiles PowerMgmt CommDD
So any DLL being loaded into the file server process must have at least these capabilities. There is no way around this, short of writing a new proxy process, as you allude to.
However, there is a more fundamental reason why you shouldn't be using bafl.dll from within a file server plugin: this DLL provides utility functions which interface to the file server's client API. Attempting to use it from within the file server will not work; at best, it will lead to the file server deadlocking as it attempts to connect to itself.
I'd suggest rethinking what you're trying to do, and investigating an internal file-server API to achieve it instead.
Using RFs/RFile/RDir APIs from within a file server plugin is not safe and can potentially lead to deadlock if you're not very careful.
Symbian 9.5 will introduce new APIs (RFilePlugin, RFsPlugin and RDirPlugin) which should be used instead.
There's a proper mechanism for communicating with plugins: RPlugin.
Do not use RFile. I'm not even sure it would work, as the path is checked in the Initialise step of the RFile functions, which is called before the plugin stack.
Tell us what kind of data you are storing in the resource file. Things that usually go into resource files have no place in a file server plugin, even if that means hardcoding a few values.
Technically, you can send data to a file server plugin using RFile.Write() but that's not a great solution (intercept RFile.Open("invalid file name that only your plugin understands") in the plugin).
EDIT: Someone indicated that using an invalid file name will not let you send data to the plugin. Hey, I didn't like that solution either. For the sake of completeness, I should clarify: make up a filename that looks OK enough to get through to your plugin, such as one using a drive letter that doesn't have a real drive attached to it (but will still be considered valid by filename-parsing code).
Writing code to parse the resource file's binary format in the plugin, while theoretically possible, isn't a great solution either.