How to configure timestamp in WebLogic 12c diagnostic log file name

I'm working with WebLogic version 12.2.1.
I need to save my WebLogic domain diagnostic log files with names in the following format: "myserver_diagnostic-yyyy-MM-dd'T'HH:mm.log".
If the date is "4th July 2018, 02:56 PM", the file should be named "myserver_diagnostic-2018-07-04T14:56.log".
The current configuration in the WebLogic Enterprise Manager is:
${domain.home}/servers/${weblogic.Name}/logs/${weblogic.Name}-diagnostic.log
which results in the file name myserver-diagnostic.log.
How can I add a timestamp to this configuration?

For WLS diagnostic log files, unfortunately, you don't have the option to add a timestamp as part of the filename. Diagnostic log files are handled by Oracle Diagnostic Logging (ODL), which uses numeric suffixes when rotating files. ODL does allow rotating on time-based conditions, but the rotated filename will always use a number, no matter what type of condition you select. For more details, see the section "Configuring Log File Rotation":
https://docs.oracle.com/en/middleware/fusion-middleware/12.2.1.3/asadm/managing-log-files-and-diagnostic-data.html#GUID-391C6ADF-1D5D-46B4-A7F6-C9CF855F8AF6
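For reference, time-based rotation of the diagnostic log is configured on the ODL handler in the server's logging.xml. A minimal sketch (the property values are illustrative; rotated files will still get numeric suffixes, not timestamps):
<log_handler name='odl-handler' class='oracle.core.ojdl.logging.ODLHandlerFactory'>
  <property name='path' value='${domain.home}/servers/${weblogic.Name}/logs/${weblogic.Name}-diagnostic.log'/>
  <!-- rotate once a day, starting at midnight -->
  <property name='rotationFrequency' value='daily'/>
  <property name='baseRotationTime' value='00:00'/>
  <!-- keep rotated files for a week -->
  <property name='retentionPeriod' value='week'/>
</log_handler>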
Now, if you want to add a timestamp to the WLS log files (not the diagnostic log files, but the main WLS log files), here is an excerpt from the documentation explaining how WLS can use a timestamp to name the log files when log file rotation is enabled:
If you enter the following value in the File Name field:
myserver_%yyyy%_%MM%_%dd%_%hh%_%mm%.log, the server's log file will be
named myserver_yyyy_MM_dd_hh_mm.log.
When the server instance rotates the log file, the rotated file name
contains the date stamp. For example, if the server instance rotates
its local log file on 4 March, 2005 at 10:15 AM, the log file that
contains the old log messages will be named
myserver_2005_03_04_10_15.lognnnnn. (The current, in-use server log
file retains the name myserver_yyyy_MM_dd_hh_mm.log.)
Here you can find details on how to set the log filename from WLS console (for version 14.1.1.0):
https://docs.oracle.com/en/middleware/standalone/weblogic-server/14.1.1.0/wlach/taskhelp/logging/RotateLogFiles.html
And here is the same documentation page, but for version 12.2.1.3:
https://docs.oracle.com/middleware/12213/wls/WLACH/taskhelp/logging/RotateLogFiles.html
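If you manage the configuration by script rather than through the console, here is a minimal WLST sketch that applies the same file name pattern to the server log (the admin credentials, URL and server name are placeholders):
connect('weblogic', 'welcome1', 't3://localhost:7001')
edit()
startEdit()
# Navigate to the Log MBean of the server
cd('/Servers/myserver/Log/myserver')
# Date/time variables in the pattern are substituted when the file is rotated
cmo.setFileName('logs/myserver_%yyyy%_%MM%_%dd%_%hh%_%mm%.log')
# Rotate on a time-based schedule instead of by size
cmo.setRotationType('byTime')
cmo.setRotationTime('00:00')
save()
activate()
disconnect()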

Pentaho DI default logging database for kitchen.sh

I am using PDI 8.3 with the repository database on another server.
My expectation was that if I do not define any log connections in the job properties, the job will not send any logs to the repository database.
However, when I run a job with kitchen.sh, it defines a new database connection, "live_logging_info", that points to "localhost:5432". Because the PDI repository database is on another server, the job fails.
May I know how to define the default DB log connection? Thank you.
Under PDI 8.3 there should be a folder called simple-jndi, and within that folder a file called jdbc.properties. In that file, near the bottom, there are settings for live_logging_info. By default it points to localhost:5432, but you can set it to any location, or even to another type of database (MySQL, MSSQL, etc.).
The settings that are available by default are:
live_logging_info/type=javax.sql.DataSource
live_logging_info/driver=org.postgresql.Driver
live_logging_info/url=jdbc:postgresql://localhost:5432/hibernate?searchpath=pentaho_dilogs
live_logging_info/user=hibuser
live_logging_info/password=password
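For example, to point the logging connection at a PostgreSQL instance on the repository server instead of localhost (the host name and credentials below are placeholders for your own values):
live_logging_info/type=javax.sql.DataSource
live_logging_info/driver=org.postgresql.Driver
live_logging_info/url=jdbc:postgresql://my-repo-host:5432/hibernate?searchpath=pentaho_dilogs
live_logging_info/user=hibuser
live_logging_info/password=password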

Configure TDWC v8.6 FP3 or v9.x for reporting using an Oracle database

Is there a concise set of steps available to allow Tivoli Dynamic Workload Console (TDWC) v8.6 FP3 or v9.x to successfully create reports when the Tivoli Workload Scheduler (TWS) master is using Oracle as the database vendor?
TDWC v8.6 FP3 and v9.x do not include the JDBC libraries that allow a Type 4 connection to an Oracle database. Here is a typical message returned when attempting to connect to an engine in DWC when the "Enable reporting" box is checked, but the configuration work has not been done:
Manage Engines
AWSUI0803W Test connection to "ENGINENAME": engine successful, database failed.
AWSUI0346E Database failure. Check the database is available and the connection parameters are correct and retry: database user: TWS_user, database JDBC URL: jdbc:oracle:thin:#//1.2.3.4:1521/DBNAME
If the problem persists contact the Tivoli Workload Scheduler administrator.
The database internal message is: No suitable driver found for jdbc:oracle:thin#//1.2.3.4:1621/DBNAME
The TWS online documentation includes the procedure to accomplish the needed configuration. There are, however, a couple of external links that must be used to make the modifications within WebSphere Application Server (WAS). The full details are described below:
Actions taken on Tivoli Workload Scheduler engine:
For Oracle, the IT administrator, or the Tivoli Workload Scheduler IT administrator, or both working together, perform the following steps:
Use the TWS Oracle user specified during the master domain manager installation or perform the following steps to create a new user:
a. Create a database user authorized to access the database and specify a password.
b. Launch the following script:
/TWS/dbtools/Oracle/scripts/dbgrant.bat (or .sh)
passing the following values, in order:
the Tivoli Workload Automation instance directory
the ID of the user created in step 1.a, who is to be granted access to the reports
the name of the database, as created when the master domain manager was installed
the user ID and password of the database schema owner
An illustrative invocation is sketched below.
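As an illustration only (the argument order follows the list above; every value is a placeholder, not documented syntax), an invocation on UNIX might look like:
./dbgrant.sh /opt/IBM/TWA rpt_user TWSDB tws_owner owner_password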
Define a valid connection string to the database:
a. Ensure that the following property is set in the TWSConfig.properties file to point to the Oracle JDBC URL: com.ibm.tws.webui.oracleJdbcURL
v8.6 FP3: /eWAS/profiles/TIPProfile/properties/TWSConfig.properties
v9.x: /WAS/TWSProfile/properties/TWSConfig.properties
For example:
com.ibm.tws.webui.oracleJdbcURL=jdbc:oracle:thin:@//9.132.235.7:1521/orcl
b. Restart the WebSphere Application Server.
Actions taken on the Dynamic Workload Console:
Download the JDBC drivers required by your Oracle server version.
Copy the JDBC drivers into a directory that is accessible by the WebSphere Application Server used by your Dynamic Workload Console.
Create a shared library on WebSphere Application Server specifying the path and filename of the JDBC drivers you have copied, as documented below:
a. Login to the WebSphere Admin Console for TDWC as the WebSphere Administrative user.
v8.6 FP3: The default https admin port is 31124.
v9.x: The default https admin port is 16316.
The URL will be similar to this: https://(hostname):16316/ibm/console
b. Select Environment > Shared libraries in the console navigation tree.
c. Select the following scope from the dropdown list:
v8.6 FP3: Node=TIPNode01, Server=server1
v9.x: Node=JazzSMNode01, Server=server1
...and select [New]
d. Specify a new name such as oraclelibs
e. Specify the path to the directory that holds the Oracle JDBC drivers in the Classpath field.
*Field Detail: If a path in the list is a file, the product searches the contents of that Java archive (JAR) or compressed .zip file. If a path in the list is a directory, then the product searches the contents of JAR and compressed files in that directory. Press Enter to separate class path entries.
f. Select [Apply]
*NOTE: The file that is updated by the above step is: libraries.xml
v8.6 FP3: /eWAS/profiles/TIPProfile/config/cells/TIPCell/nodes/TIPNode/servers/server1/libraries.xml
v9.x: cells/JazzSMNode01Cell/nodes/JazzSMNode01/servers/server1/libraries.xml
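If you prefer scripting over the console, the same shared library can be created with wsadmin (Jython). A sketch, assuming the v9.x scope names and an illustrative driver path:
# Look up the server scope (use TIPCell/TIPNode instead for v8.6 FP3)
scope = AdminConfig.getid('/Cell:JazzSMNode01Cell/Node:JazzSMNode01/Server:server1/')
# Create the shared library pointing at the Oracle JDBC driver jar
AdminConfig.create('Library', scope,
    [['name', 'oraclelibs'],
     ['classPath', '/opt/oracle/jdbc/ojdbc7.jar']])
# Persist the change to the configuration repository
AdminConfig.save()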
Associate the isc Enterprise Application to this shared library:
a. Still in the WebSphere Admin Console, use the console navigation tree to access the Shared library references page: select Applications > Application Types > WebSphere enterprise applications > isc > Shared library references
b. Check the box for the application named isc. This should be the first box, and the only box, in the table section whose second column is named "Application".
c. Select the [Reference shared libraries] button.
d. In the "Available" box select the name of the new shared library that you created in step 3. For example: oraclelibs. Then select the Add arrow button to move the shared library name from the Available to Selected box.
e. Select [Ok]
f. Select [Ok] on the "Shared library references" page.
g. Select the blue word Save at the top of the page in the Messages box.
*NOTE: The files that are noted as being updated in this scenario are:
v8.6 FP3:
cells/TIPCell/nodes/TIPNode/serverindex.xml
cells/TIPCell/applications/isc.ear/deltas/isc/delta-<#>
cells/TIPCell/applications/isc.ear/deployments/isc/deployment.xml
v9.x:
cells/JazzSMNode01Cell/nodes/JazzSMNode01/serverindex.xml
cells/JazzSMNode01Cell/applications/isc.ear/deltas/isc/delta-<#>
cells/JazzSMNode01Cell/applications/isc.ear/deployments/isc/deployment.xml
**NOTE: It is the file deployment.xml that has reference to the actual shared library name that was created in step 3.
***NOTE: Sample entry:
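A representative fragment is sketched here (illustrative only; the exact xmi:id and attributes vary by WAS version):
<classloader xmi:id="Classloader_1">
  <!-- libraryName must match the shared library created in step 3 -->
  <libraries libraryName="oraclelibs" sharedClassloader="true"/>
</classloader>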
h. Restart the WebSphere Application Server.
Log on to the Dynamic Workload Console.
In the Dashboard Application Services Hub navigation bar, select System Configuration > Manage Engines. The Manage Engines panel opens.
Select the engine you defined or create another engine. The Engine Connection properties panel is displayed.
In Database Configuration for Reporting, perform the following:
a. Check Enable Reporting to enable the engine connection you selected to run reports.
b. In Database User ID and Password, specify the database user and password that you authorized to access reports.

RavenDB database restore: Operation failed: Cannot access file, the file is locked or in use

A year ago, when we started using RavenDB, we quickly hit this error in production:
Operation failed: Cannot access file, the file is locked or in use
We found out through some forums that we could get rid of this by running RavenDB not as an IIS website but as a Windows service. We did, and never saw the error again.
Until now: I'm setting up a new environment and thought I'd give it a whirl running RavenDB off IIS, but the error quickly reappeared.
Facts and what I've tried:
running RavenDB build 30155 on a commercial license
using a custom AppPool (called RavenApplicationPool)
have data-folder separate from IIS dir (through Raven/WorkingDir AppSetting)
RavenApplicationPool user has "Full control" permissions on both IIS folder and data-folder
Windows Authentication installed and enabled for website
overlapped recycle disabled for app-pool
webdav-publishing not installed (as requested in setup docs)
hitting my keyboard (no luck)
The error occurs at random when site has been running for a few minutes. It also occurs when trying to restore a backup using the following command:
C:\RavenDBExecutables\Server\raven.Server.exe --restore-source=c:\SOME_PATH\my-db-backup.raven --restore-database-name=my-db --restore-database=http://localhost:8080
(the RavenDBExecutables folder is merely the unzipped binaries, executables, etc. - it does not overlap with the IIS website folder)
When running the restore, these folders are created:
IndexDefinitions (contains a lot of files ending in .index)
Indexes (lot of folders with integer names)
logs (empty)
system (empty)
temp (empty)
root folder (my-db) is empty, i.e. no Data file, .resource.database or raven-data.ico as with working db's
After the failed restore, if I visit http://localhost:8080/docs/Raven/Restore/Status, I get a lot of lines like "Copying PATH_TO_INDEX_FILE", followed by:
"Esent Restore: Failure! Could not restore database!",
"Microsoft.Isam.Esent.Interop.EsentFileAccessDeniedException: Cannot access file, the file is locked or in use
at Microsoft.Isam.Esent.Interop.Api.Check(Int32 err)
at Microsoft.Isam.Esent.Interop.Api.JetRestoreInstance(JET_INSTANCE instance, String source, String destination, JET_PFNSTATUS statusCallback)
at Raven.Database.Storage.Esent.Backup.RestoreOperation.Execute()",
"Unable to restore database Cannot access file, the file is locked or in use"
What can it be then?
The reason this happens is that IIS, by default, does overlapped recycling.
That means that both versions of the app are running (and both are trying to use the same resources).
With overlapped recycling disabled, that shouldn't happen.
We have also seen similar errors when users have set two databases to the same path.
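If you do keep running RavenDB under IIS, overlapped recycling can be switched off for the application pool from the command line as well; a minimal sketch, assuming the pool name from the question:
%windir%\system32\inetsrv\appcmd.exe set apppool "RavenApplicationPool" /recycling.disallowOverlappingRotation:true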

How to capture Firebird SQL queries?

Is there any way to capture the SQL queries transmitted by an old application created in Delphi/C++Builder + Firebird?
I don't have the source code of that client app, or access to the (remote) database server.
Firebird 2.5 added the trace API, which can be used to track the preparation and execution of statements, among a number of other things. The tools included in Firebird for using the trace API are rather basic, but they might well be sufficient for your needs. Be aware that by default the trace API limits the size of statements captured and logged, and it might take some time to tweak the trace configuration to get all the information you need.
An example configuration is:
<database mydatabase.fdb>
enabled true
log_statement_prepare true
time_threshold 0
max_sql_length 65536
</database>
This should capture all statement prepares with the full SQL query in the database mydatabase.fdb.
See for more information: Audit and Trace Services in Firebird 2.5.
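Note that Firebird 3 changed the trace configuration syntax. On Firebird 3 the equivalent configuration would look roughly like this (a sketch modeled on the fbtrace.conf template shipped with Firebird 3):
# match the database by a regular expression on its file name
database = %[\\/]mydatabase.fdb
{
  enabled = true
  log_statement_prepare = true
  time_threshold = 0
  max_sql_length = 65536
}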
There are several vendors who provide tools that utilize the trace API (for example FB Tracemanager by Upscene Productions), and as already mentioned in the comments, there is also FBScanner by IBSurgeon which acts as a proxy between the client and a Firebird server and allows you to record the traffic (including statements).
Firebird includes a utility fbtracemgr.exe that can be used for tracing. Here's a sample command line:
cd "C:\Program Files\Firebird\Firebird_3_0"
fbtracemgr -start -service localhost/3050:service_mgr -config c:\temp\fb-trace.config -user sysdba -password <secret> >c:\temp\fb-trace.log
Discussion of parameters:
The -start parameter instructs the tool to start a trace session. There are other parameters, just run fbtracemgr.exe without any arguments to see a list of possible parameters.
The -service parameter tells the tool which service to trace. It is essential that you use the same connection method as the client that you want to monitor.
Let's say you use FlameRobin; in this case you probably have defined a database connection that uses TCP/IP and connects to localhost on the default TCP port 3050. To match this, you have to prefix the service name with "localhost/3050".
If you want to trace an isql.exe session, then you probably let isql.exe connect without using localhost. In this case you have to omit the "localhost/port" prefix and just specify -service service_mgr.
The -config parameter specifies the path where the config file is located that contains the settings to be used for this trace session. Tracing must be configured with settings that define all the details of the trace, including what to trace. The settings can only be specified in the form of a configuration file.
The Firebird engine performs tracing of its own - the System Audit session. For this purpose it includes a trace configuration file located in its program folder: C:\Program Files\Firebird\Firebird_3_0\fbtrace.conf. Use this file as an inspiration/template; it contains many commented options explaining the purpose and syntax of each option.
The -user and -password parameters are necessary only if you want to monitor a TCP/IP connection. If you want to monitor direct connections without authentication (e.g. isql.exe) then you can omit the credentials.
The user you specify for tracing must, obviously, have the rights to "spy" on the connection being traced.
The example uses "sysdba", which of course has all the rights. The user of the connection being traced should also work.
The last part of the command redirects output to a trace log file. This is optional, but you'll probably want to do it because there can be lots of output. You can open the trace log file in a text editor such as Notepad++, which will alert you when new content is written to the file.
Sorry for necroposting :) but I had the same question. And now we have the trace/audit tool in the IBExpert IDE. It can be found in the Services menu.

Complex SQL query on BULK insert

I am using the following query to load data from a text file into a database table:
bulk insert Test_Training.dbo.test
from 'D:\SSRS\kasthuri.txt'
I have kasthuri.txt file in specified path. But I am getting error when I execute it.
Msg 4861, Level 16, State 1, Line 2
Cannot bulk load because the file "D:\SSRS\kasthuri.txt" could not be opened. Operating system error code 3(The system cannot find the path specified.).
The error message appears because the service account running your SQL Server instance cannot access the file path. Wherever you place the file, you will need to open up the folder where the file resides to the SQL Server service account:
I always struggle with this. For me, allowing the MSSQLSERVER service account full permission to the folder where the input file resides always seems to work.
Right-click the folder (as an admin on the box), go to Properties > Security > Edit > Add...
Here is where I always get tripped up: for me, the server service account is "NT Service\MSSQLSERVER", and I can never search for that user. I have to type it in manually and check the name to make sure I typed it in correctly. For you, this may not be the service account used by your server. Check your services list from Windows administrative tools to see what account is in "Log On As" for SQL Server (MSSQLSERVER).
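The same grant can be scripted. A minimal sketch, assuming the default instance's service account and the folder from the question:
rem Grant read+execute (inherited by files and subfolders) to the SQL Server service account
icacls "D:\SSRS" /grant "NT SERVICE\MSSQLSERVER":(OI)(CI)RX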