I have an application from a third party that writes to an Oracle database. One component of the application returns no data given particular parameters, while the other component does return data with those same parameters. Nobody has the source code to this application, but we can see that the database contains the proper information.
The misbehaving component gets ORA-01403 ("no data found") back from the Oracle database server, as I saw with a packet sniffer I installed; that error normally means the query matched no rows, but it can also be related to a syntax error.
I want to see the differences between the queries that the two components actually generate. I would also like to run these queries on the command line, or in some other database viewer, to see what gets returned.
How can I monitor the database with a trace that actually shows the queries being made? I would also like to isolate these by IP address or source.
Using Oracle 10g Enterprise
I found this worked well for an AWS Oracle RDS instance. I ran tcpdump from the Linux instances connecting to the DB:
tcpdump -s 0 -l -w - tcp port 1521 | strings | perl -e '
# stitch the printable fragments back into one SQL statement per line
my $q;
while (<>) {
    chomp;
    next if /^[^ ]+[ ]*$/;   # skip single-token noise lines
    if (/^(SELECT|UPDATE|DELETE|INSERT|SET|COMMIT|ROLLBACK|CREATE|DROP|ALTER)/i) {
        print "$q\n" if defined $q;   # a new statement starts: flush the previous one
        $q = $_;
    } else {
        s/^[ \t]+//;                  # continuation line: append to the current statement
        $q .= " $_";
    }
}
print "$q\n" if defined $q;           # flush the final statement
'
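If you need to isolate a particular client by IP, as asked, you can narrow the capture filter to one source, e.g. tcpdump -s 0 -l -w - host 10.0.0.5 and tcp port 1521 (the IP address is a made-up example).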
Hope that helps someone else.
IIRC, TOAD will do what you want.
Additionally, there is a free trial - http://www.quest.com/toad-for-oracle/software-downloads.aspx
There are other interesting downloads (search oracle free toad) but I can't be sure of their legitimacy.
If your client connects directly to the database without any middle-tier layer, then you have two fairly straightforward options.
First of all, figure out the required session's SID using the v$session view, and then either find your query in v$sql/v$sqltext by its hash value (you can check the description of each view in the docs), or enable session-level SQL trace and read your queries from a plain-text trace file.
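For illustration, a minimal sketch of that lookup (the machine filter is just a hypothetical way to isolate the client by source, as asked; any v$session column such as machine, osuser, or program will do):

SELECT sid, serial#, machine, program, sql_hash_value
  FROM v$session
 WHERE machine = 'app-server-01';  -- hypothetical client host

-- feed the hash value from above into v$sqltext to get the full statement
SELECT sql_text
  FROM v$sqltext
 WHERE hash_value = :hash_value
 ORDER BY piece;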
If you have a middle tier, then things get slightly more complicated, but only in terms of figuring out which session to trace. You can always enable system-wide tracing, though.
A little late to the party, but I ran into this problem and didn't want to install something on the database server for a one-off use, so I wound up using Wireshark; the queries were sent in plaintext and were perfectly readable.
I'm using Redis to store the userId as a key and the socketId as the value. The key point is that the userId doesn't change, but the socketId constantly changes. So I want to edit the socketId value inside Redis, but I'm not sure which node_redis command to use. I'm currently just editing it with .set(userId, mostRecentSocketId).
In addition, I haven't found a good node_redis API reference anywhere with a complete list of commands. I briefly looked at the redis-commands package, but it still doesn't seem to have a full list.
Any help is appreciated; thanks in advance :)
The full list of Redis commands can be found at https://redis.io/commands. After finding the proper command, it isn't hard to find out how it is proxied in the binding ("API") you use.
Update: to make it clear, you have the Redis server, whose commands are listed in the doc linked above. Then you have redis-commands, a library for working with Redis (I called it a "binding"). My point was that redis-commands may not expose all the commands the Redis server can handle, and the names of some commands can differ a bit. Other bindings can offer slightly different sets of commands. So it's better to examine the list of commands that the Redis server handles, and then select a binding that allows calling the command you need (I guess all the bindings have a set method).
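For what it's worth, a minimal sketch with the classic callback-style node_redis API (the key and value here are made up; newer versions of the library are promise-based, so check your version's README):

const redis = require('redis');
const client = redis.createClient();

const userId = 'user:42';                 // hypothetical key
const mostRecentSocketId = 'sock-abc123'; // hypothetical value

// SET simply overwrites whatever the key held before, so it is
// already the right command for updating the socketId
client.set(userId, mostRecentSocketId, function (err, reply) {
    if (err) throw err;
    console.log(reply); // "OK"
});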
Is it just a plain print to the screen? If it is, why not simply use an ordinary Python print?
I've looked it up and there isn't much information about it. Even the official PostgreSQL documentation says little about it beyond the fact that it exists.
Is the plpy module preferred because that way the printed information won't end up in the PostgreSQL log file?
The PL/Python plpy.notice(msg) method and its cousins, plpy.debug(msg), plpy.log(msg), plpy.info(msg), plpy.warning(msg), plpy.error(msg), and plpy.fatal(msg) are used to generate messages using PostgreSQL's logging capabilities. The error and fatal variants also raise an exception which can be used to abort the current SQL transaction. plpy.notice(msg) is equivalent to the PL/PgSQL command RAISE NOTICE msg.
According to the PostgreSQL 9.4 documentation http://www.postgresql.org/docs/9.4/static/plpython-util.html, the destination of log messages at various levels can be controlled via database configuration variables. For example, you can specify that you only want messages of at least WARNING level to be dispatched to the client, but anything from NOTICE and above to be logged to the server log. This has been the case at least back to PostgreSQL 8.0.
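For example, a minimal sketch, assuming the plpythonu language is installed (the function name and message texts are made up):

CREATE OR REPLACE FUNCTION log_levels_demo() RETURNS void AS $$
    plpy.debug('lowest level, usually suppressed')
    plpy.notice('behaves like RAISE NOTICE in PL/pgSQL')
    plpy.warning('shown to the client by default')
$$ LANGUAGE plpythonu;

SELECT log_levels_demo();

-- raise the client threshold so only WARNING and above reach the client
SET client_min_messages = warning;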
This has probably been asked before, but I'm looking for a utility that can:
Identify a particular session and record all its activity.
Identify the SQL that was executed under that session.
Identify any stored procedures/functions/packages that were executed.
Show what was passed as parameters into the procs/funcs.
I'm looking for an IDE that's lightweight, fast, and readily available, and that won't take two days to install, i.e. something I can download, install, and use within the next hour.
Bob.
If you have a license for the Oracle Diagnostics/Tuning Packs, you can use the Oracle Active Session History (ASH) feature.
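For example, a rough sketch of pulling one session's recent activity out of ASH (bind the SID and serial# of the session you care about):

SELECT sample_time, sql_id, event
  FROM v$active_session_history
 WHERE session_id = :sid
   AND session_serial# = :serial#
 ORDER BY sample_time;

Joining sql_id against v$sql then gives you the statement text.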
The easiest way I can think of to do this is probably already installed in your database - it's the DBMS_MONITOR package, which writes trace files to the location identified by user_dump_dest. As such, you'd need help from someone with access to the database server to access the trace files.
But once you've identified the SID and SERIAL# of the session you want to trace, you can just call:
EXEC dbms_monitor.session_trace_enable(:sid, :serial#, FALSE, TRUE);
to capture all the SQL statements being run, including the values passed in as binds.
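A minimal sketch of the whole sequence (the username filter is just a hypothetical way to find the session; the resulting trace file in user_dump_dest can be formatted with tkprof):

-- find the session to trace
SELECT sid, serial#, username, machine, program
  FROM v$session
 WHERE username = 'APP_USER';  -- hypothetical schema

-- waits => FALSE, binds => TRUE, as in the call above
EXEC dbms_monitor.session_trace_enable(:sid, :serial#, FALSE, TRUE);

-- ...reproduce the problem, then stop tracing
EXEC dbms_monitor.session_trace_disable(:sid, :serial#);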
I want to measure the performance and scalability of my DB application. I am looking for a tool that would allow me to run many SQL statements against my DB, taking the DB and script (SQL) file as arguments (+necessary details, e.g. host name, port, login...).
Ideally it should let me control parameters such as the number of simulated clients and the duration of the test, and let me randomize variables or select them from a list (e.g. SELECT ... FROM ... WHERE value = #var, where var is read from the command line or randomized per execution). I would like the test results to be saved as a CSV or XML file that I can analyze and plot. And of course, in terms of pricing, I prefer "free" or "demo" :-)
Surprisingly (for me at least), while there are dozens of such tools for web-application load testing, I couldn't find any for DB testing. The ones I did see, such as pgbench, use a built-in schema based on some TPC scenario, so they help test the DBMS configuration and H/W, but I can't use them to test MY DB! Any suggestions?
Specifically I use Postgres 8.3 on Linux, though I could use any DB-generic tool that meets these requirements. The H/W has 32GB of RAM while the size of the main tables and indexes is ~120GB. Hence there can be a 1:10 response time ratio between cold vs warm cache runs (I/O vs RAM). Realistically I expect requests to be spread evenly, so it's important for me to test queries against different pieces of the DB.
Feel free to also contact me via email.
Thanks!
-- Shaul Dar (info#shauldar.com)
JMeter from Apache can handle different server types. I use it for load tests against web applications; others on the team use it for DB calls. It can be configured in many ways to generate the load you need, and it can be run in console mode and even clustered across different client machines to minimize client overhead (which would otherwise skew the results).
It's a Java application and a bit complex at first sight, but we still love it. :-)
k6.io can stress test a few relational databases with the xk6-sql extension.
For reference, a test script could be something like:
import sql from 'k6/x/sql';

// open a handle once per VU; xk6-sql also supports mysql and postgres drivers
const db = sql.open("sqlite3", "./test.db");

export function setup() {
    // create the table once before the load test starts
    db.exec(`CREATE TABLE IF NOT EXISTS keyvalues (
        id integer PRIMARY KEY AUTOINCREMENT,
        key varchar NOT NULL,
        value varchar);`);
}

export function teardown() {
    db.close();
}

export default function () {
    // each iteration inserts a row, then reads the table back
    db.exec("INSERT INTO keyvalues (key, value) VALUES('plugin-name', 'k6-plugin-sql');");
    let results = sql.query(db, "SELECT * FROM keyvalues;");
    for (const row of results) {
        console.log(`key: ${row.key}, value: ${row.value}`);
    }
}
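If it helps, this is roughly how you would build a k6 binary with the extension and run the script (the script file name is an assumption; the module path is taken from the xk6-sql README, so verify it against the current docs):

xk6 build --with github.com/grafana/xk6-sql
./k6 run script.js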
Read more in this short tutorial.
The SQL Load Generator is another such tool:
http://sqlloadgenerator.codeplex.com/
I like it, but it doesn't yet have an option to save the test setup.
We never really found an adequate solution for stress testing our mainframe DB2 database so we ended up rolling our own. It actually just consists of a bank of 30 PCs running Linux with DB2 Connect installed.
29 of the boxes run a script which simply waits for a starter file to appear on an NFS mount and then starts executing fixed queries based on the data. The fact that these queries (and the data in the database) are fixed means we can easily compare the results against previous successful runs.
The 30th box runs two scripts in succession (the second is the same as all the other boxes). The first empties then populates the database tables with our known data and then creates the starter file to allow all the other machines (and itself) to continue.
This is all done with bash and DB2 Connect, so it is fairly easy to maintain (and free).
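For illustration, each worker script is roughly this shape (paths, database name, and file names are all made up):

#!/bin/bash
# wait for the coordinator box to drop the starter file on the NFS mount
while [ ! -f /nfs/shared/starter ]; do
    sleep 1
done

# run the fixed query script and capture the output for later comparison
db2 connect to TESTDB user tester using "$DB2_PASSWORD"
db2 -tvf fixed_queries.sql > "run_$(hostname).out" 2>&1
db2 connect reset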
We also have another variant to do random queries based on analysis of production information collected over many months. It's harder to check the output against a known successful baseline but, in that circumstance, we're only looking for functional and performance problems (so we check for errors and queries that take too long).
We're currently examining whether we can consolidate all those physical servers into virtual machines, on both the mainframe running zLinux (which will use the shared-memory HyperSockets for TCP/IP, basically removing the network delays) and Intel platforms with VMWare, to free up some of that hardware.
It's an option you should examine if you don't mind a little bit of work up front since it gives you a great deal of control down the track.
Did you check Bristlecone, an open-source tool from Continuent? I don't use it myself, but it works for Postgres and seems to be able to do the things you're asking for. (Sorry, as a new user I cannot give you a direct link to the tool page, but Google will get you there ;o])
I got the following SQLException: "invalid options in all7"
Upon googling the error message, the ONLY hits I saw were Oracle error lists which pinpointed the error as "ORA-17432: invalid options in all7". Sadly, googling for the error number brought up only combined lists with no explanation, aside from one page that gave "A TTC Error Message" as the entire explanation.
The error happens when a Java program retrieves data from a prepared statement call executing a procedure that returns a fairly large, but not unreasonable, number of rows via a cursor.
I can add the stack trace from the exception, as well as condensed code, but I assume that's not terribly relevant to figuring out what "ORA-17432: invalid options in all7" means.
Context:
The error seems to have appeared when the Java program was migrated from the Oracle 9 OCI driver to the Oracle 10.2 thin driver. The procedure, when run directly against the database (via Toad), works perfectly fine and returns the correct cursor with correct data and no errors.
This seems to be something data-specific (result set size, maybe?), since running the same exact code against a different currency as the procedure parameter (which returns a much smaller result set) works 100% fine.
This is almost certainly not something you're going to have control over. It looks like a problem with the way your thin driver is using the two-task common (TTC) protocol. One thing to note is that this sort of thing can be very sensitive to the version of the driver you are using. Make absolutely certain that you have the latest version of the JDBC driver for the combination of the Java version you're using and the Oracle version on the server.
Akohchi, you were in the right area, though not quite correct. The explanation obtained via an Oracle Support call was that this version of Java (1.3) was not compatible with the new Oracle version. Java 1.4 fixed the issue.