Maximo: Mimic Workflow assignments with an SQL query

I want to write an SQL query that mimics the results in the Maximo Start Center assignments section. The assignments are workflow assignments.
I tried querying the workorder table and specifying the assignedownergroup that the user is in:
select *
from workorder
where status in ('WAPPR', 'APPR', 'INPRG')
  and assignedownergroup = 'FIRE'
However, the query returns more work orders than what's shown in the Start Center assignments.
How can I write a query to mimic the workflow assignments in the Start Center?

My other answer would work if the portlet you highlighted was a Result Set against WORKORDER, but it is not. The portlet you have highlighted is the Workflow Inbox, which is based on WFASSIGNMENT where assigncode = 'userid'.
A full query that mimics the workflow inbox would look like this, in Oracle SQL:
select
  (select 'WO ' || wonum || ' (' || description || ') is waiting for ' || wfassignment.description
   from workorder
   where workorderid = wfassignment.ownerid
     and wfassignment.ownertable = 'WORKORDER'
   /* Union in other tables */) description,
  app
from wfassignment
where assignstatus = 'ACTIVE'
  and assigncode = 'JDOE'
I'm not sure where the WO prefix on the assignment description comes from. But since you could add workflow to your own app based on your own object, I would like to think it comes from metadata somewhere instead of code. And the description itself is probably a format string in MAXMESSAGES.
You'll notice the "Union in other tables" comment in my query; that is where you would add unioned queries against PR or PM or ASSET or whatever, as in the sketch below.
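For illustration, a hedged sketch of one such union branch, assuming Maximo's PR object uses prid as its key and carries prnum/description columns (and that the 'PR ' prefix mirrors the 'WO ' one):
select
  (select 'WO ' || wonum || ' (' || description || ') is waiting for ' || wfassignment.description
   from workorder
   where workorderid = wfassignment.ownerid
     and wfassignment.ownertable = 'WORKORDER'
   union all
   select 'PR ' || prnum || ' (' || description || ') is waiting for ' || wfassignment.description
   from pr
   where prid = wfassignment.ownerid
     and wfassignment.ownertable = 'PR') description,
  app
from wfassignment
where assignstatus = 'ACTIVE'
  and assigncode = 'JDOE'
Because the ownertable predicate discriminates between the branches, the scalar subquery still returns at most one row per assignment.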

The easiest way to get the SQL that Maximo is running is:
Go to the Logging application
Select the sql Root Logger and add a "child" Logger of WORKORDER.WORKORDER (that's SERVICE.OBJECT from DB Config) with a Log Level of INFO.
Get ready to open your log file.
Load your start center.
Open your log file.
The SQL issued by Maximo to load the result set should be near the bottom of your log file.


How can I schedule a script in BigQuery?

At last BigQuery supports using ; in queries, so I can write more than one statement in one "block" if I separate them with semicolons.
If I run the code manually, it works. But I cannot schedule it.
When I want to schedule, I have two choices:
(New) Web UI: I must give a destination table. If I don't, I cannot save the scheduled query. But all my queries are updates and inserts with different destination tables, like these:
UPDATE project.exampledataset.a
SET date = current_date()
WHERE TRUE
;
INSERT INTO project.otherdataset.b
SELECT c, d
FROM project.otherdataset.c
So I cannot even create the schedule in the Web UI.
Classic UI: I tried this because the official documentation states that I should leave the "destination table" blank, and the Classic UI allows it. I can set up the schedule, but it doesn't run when it should. I get this error message by email: "Error status: Dataset specified in the query ('') is not consistent with Destination dataset 'exampledataset'."
AFAIK scripting (and using semicolons) is a very new feature in BigQuery, but I hope someone can help me.
Yes, I know that I could schedule every query one by one, but I would like to resolve it with one big script.
It looks like the scheduled query was originally defined with a destination dataset and an APPEND/TRUNCATE write disposition. When updating the same scheduled query to a DML query, the GUI doesn't show the dataset/table fields, so they cannot be cleared to NULL; hence the error refers to the previously saved dataset and table name.
The fix is therefore to delete the scheduled query and create it from scratch with the DML query option. It worked for me.
Scripting is now supported in scheduled queries. However, a scripted query, when scheduled, doesn't support setting a destination table for now; you still need DDL/DML to make changes to an existing table.
E.g.:
CREATE OR REPLACE TABLE destinationTable AS
SELECT *
FROM sourceTable
WHERE date >= maxDate
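Applied to the question, the scheduled script body can then simply be the DML statements themselves, with no destination table configured (a sketch reusing the asker's table names):
UPDATE project.exampledataset.a
SET date = current_date()
WHERE TRUE;

INSERT INTO project.otherdataset.b
SELECT c, d
FROM project.otherdataset.c;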
As of 2022, the BQ Console UI will let you create a new scheduled query without a destination dataset, but it won't let you update a prior SELECT to use DDL/DML block syntax. However, you can use the BigQuery Data Transfer API to update the destinationDatasetId field, via transferconfigs/patch. Use transferconfigs/list to get the configId for a given scheduled query.
Note that you can either use the in-browser API Explorer, if you have the appropriate credentials, or write a programmatic solution. Also seems useful for setting/updating any other fields, including renaming scheduled queries.

Trying to find out which tables are queried by the Oracle external application

I wonder if this could be possible or not.
I am using TOAD, connected to an Oracle database (11g), and I have access to the Oracle E-Business Suite application.
Basically, I want Toad to trace what SQL is being executed by the Oracle E-Business Suite application.
I have this query:
SELECT nvl(ses.username,'ORACLE PROC')||' ('||ses.sid||')' USERNAME,
SID,
MACHINE,
REPLACE(SQL.SQL_TEXT,CHR(10),'') STMT,
ltrim(to_char(floor(SES.LAST_CALL_ET/3600), '09')) || ':'
|| ltrim(to_char(floor(mod(SES.LAST_CALL_ET, 3600)/60), '09')) || ':'
|| ltrim(to_char(mod(SES.LAST_CALL_ET, 60), '09')) RUNT
FROM V$SESSION SES,
V$SQLtext_with_newlines SQL
where SES.STATUS = 'ACTIVE'
and SES.USERNAME is not null
and SES.SQL_ADDRESS = SQL.ADDRESS
and SES.SQL_HASH_VALUE = SQL.HASH_VALUE
and Ses.AUDSID <> userenv('SESSIONID')
order by runt desc, 1,sql.piece
[Screenshot: the Oracle E-Business Suite application form]
I want to do this because I want to know which tables the Oracle application uses to obtain the contact information for a certain customer. I mean, when a random guy is using the application, he enters the account_number and clicks on "Go". That's what I need: I want to know which tables are queried when the guy presses the "Go" button; I want to trace that.
I think that I could get the session_id of the guy that is using the Oracle application, paste it into the query written above, and start working from there.
If it is possible, how could I get the session_id of the guy that is using the Oracle E-Business Suite application?
Tracing the queries an active software app is running might take a while. As such, it might be easier to dig the data out another way:
You want to know which table and column holds some data, like a user's first name.
Generate something unique, like a GUID or some impossible name that never occurs in your db (like 'a87d5iw78456w865wd87s7dtjdi'), and enter that as the First name using the UI. Save the data.
Run this query against oracle:
SELECT
  REPLACE(REPLACE(
    'UNION ALL SELECT ''{t}'', ''{c}'' FROM {t} WHERE {c} = ''a87d5iw78456w865wd87s7dtjdi'' ',
    '{t}', table_name),
    '{c}', column_name
  )
FROM user_tab_columns
WHERE data_type LIKE '%CHAR%'  -- dictionary DATA_TYPE values are uppercase (CHAR, VARCHAR2, NCHAR, NVARCHAR2)
This is "an sql that writes an SQL" - It'll generate a result set that is basically a list of sql statements like this:
UNION ALL SELECT 'tablename', 'columnname' FROM tablename WHERE columnname = 'a87d5iw78456w865wd87s7dtjdi'
UNION ALL SELECT 'table2name', 'column2name' FROM table2name WHERE column2name = 'a87d5iw78456w865wd87s7dtjdi'
UNION ALL SELECT 'table3name', 'column3name' FROM table3name WHERE column3name = 'a87d5iw78456w865wd87s7dtjdi'
There will be one query for each column in each table in the db. Only CHARacter columns will be searched, by the way.
Remove the first UNION ALL, so the combined statement starts with a plain SELECT (see the sketch after these steps).
Run it and wait a looong time while Oracle basically searches every column in every table in the db for your weird name.
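After removing the leading UNION ALL, the assembled statement you actually run looks something like this (the table and column names are the generated placeholders):
SELECT 'tablename', 'columnname' FROM tablename WHERE columnname = 'a87d5iw78456w865wd87s7dtjdi'
UNION ALL SELECT 'table2name', 'column2name' FROM table2name WHERE column2name = 'a87d5iw78456w865wd87s7dtjdi'
UNION ALL SELECT 'table3name', 'column3name' FROM table3name WHERE column3name = 'a87d5iw78456w865wd87s7dtjdi'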
Eventually it produces an output like:
TABLE_NAME          COLUMN_NAME
crm_contacts_info   first_name
So you know your name a87d5iw78456w865wd87s7dtjdi was saved, by the UI, in crm_contacts_info.first_name.
If it is possible, how could I get the session_id of the guy that is using the Oracle E-Business Suite application?
Yes, this is definitely possible. First things first, you need to figure out which schema/username "the guy" is using. If you don't know, you can ask the guy or have him run some simple query (something like select user from dual; will work) to get that info.
Once you have the schema name, you can query the V$SESSION table to figure out the session id. Have the guy log in, then query the V$SESSION table. Your query would look something like this: select * from v$session where username ='[SCHEMA]'; where [SCHEMA] is the schema name that the guy is using to log in. This will give you the SID, serial #, status etc. You will need this info to trace the guy's session.
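For example (a sketch; E-Business Suite database sessions typically connect as the APPS schema, so adjust the username to whatever the guy reported):
-- find the guy's session: SID and serial# are needed for tracing
SELECT sid, serial#, status, machine, module
FROM v$session
WHERE username = 'APPS';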
Generating a trace file for the session is relatively simple. You can start a trace for the entire database, or just for one session. Since you're only interested in the guy's session, you only need to trace that one. To begin the trace, you could use a command that looks something like this: EXEC DBMS_MONITOR.session_trace_enable(session_id=>[SESSIONID], serial_num=>[SERIAL#]); where [SESSIONID] and [SERIAL#] are the numbers you got from the previous step. Please keep in mind that the guy will need to be logged in for the session trace to give you any results.
Once the guy is logged in and you have enabled session trace, have the guy run whatever commands from the E-Business suite that you're interested in. Be aware that the more the guy (or the application) does while the trace is enabled, the more information you will have to get through to find whatever it is you're looking for. This can be a TON of data with applications. Just warning you ahead of time.
After the guy is finished doing the tasks, you need to disable the trace. This can be done using the DBMS_MONITOR package like before, only slightly different. The command would look something like this: EXEC DBMS_MONITOR.session_trace_disable(session_id=>[SESSIONID], serial_num=>[SERIAL#]); using the same [SESSIONID] and [SERIAL#] as before.
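Put together, the two calls look like this in SQL*Plus (a sketch with placeholder values: 123 = SID, 456 = serial#):
-- enable tracing for the guy's session; binds => TRUE also captures bind values
EXEC DBMS_MONITOR.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE);

-- ... the guy performs the actions of interest in E-Business Suite ...

-- disable tracing for the same session
EXEC DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 456);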
Assuming everything has been done correctly, this will generate the trace file. The reason why @thatjeffsmith mentioned server access is that you will need access to whatever server(s) the database lives on in order to get the trace file. If you do not have access to the server, you will need to work with a DBA or someone with access in order to get it. If you just need help figuring out where the trace file is, you could run the following query using the [SESSIONID] from before:
SELECT p.tracefile
FROM v$session s
JOIN v$process p ON s.paddr = p.addr
WHERE s.sid = [SESSIONID];
This should return a path that looks similar to this: /u01/app/oracle/diag/rdbms/[database]/[instance]/trace/[instance]_ora_010719.trc
Simply navigate to that directory, pull the trace file using WinSCP, FileZilla, or the app of your choice, and that should do it.
Good luck, and hope this helps!
The SQLs executed from the EBS frontend are usually too fast to be seen in v$session. If a SQL is slower than a second (or if the snapshot timing is right), you would see it in v$active_session_history, which captures a snapshot of all active sessions every second.
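For example, a hedged sketch against v$active_session_history (note that querying it requires the Diagnostics Pack license; the module filter is a placeholder):
SELECT sample_time, session_id, sql_id, module, action
FROM v$active_session_history
WHERE module LIKE 'AR%'          -- placeholder: filter on the module of interest
ORDER BY sample_time DESC;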
The place you should look at instead is v$sqlarea, which can be queried by SQL, via Toad using the Database->Monitor->SGA Trace/Optimization menu option, or by our Blitz Report https://www.enginatics.com/reports/dba-sga-sql-performance-summary/.
This data, however, has information only at the module (i.e. which OAF page, Form, Concurrent etc.) and responsibility (action column) level; it does not contain session or application user information.
The unique key is sql_id and plan_hash_value, which means that for SQLs executed by different modules and from different responsibilities, only the module executing it first will be shown.
If you sort the data by last_active_time and filter for the module in question, it's almost as good as a trace. Bind values used can be retrieved from v$sql_bind_capture, which above Blitz Report does as well.
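A sketch of both lookups (the module name and sql_id below are placeholders):
-- recently active statements for a given module
SELECT sql_id, plan_hash_value, module, action, last_active_time, sql_text
FROM v$sqlarea
WHERE module LIKE 'AR%'          -- placeholder module filter
ORDER BY last_active_time DESC;

-- bind values captured for one of those statements
SELECT position, name, value_string
FROM v$sql_bind_capture
WHERE sql_id = 'a1b2c3d4e5f6g';  -- placeholder sql_id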

SQL select by field acting weird

I am writing this post because I have encountered something truly weird with an SQL statement I am trying to make.
Context:
I am developing an app which uses JPA in the backend to persist / retrieve objects to/from a postgres database.
Problem:
During some tests I have noticed that when a particular user adds entries in the database and later I try to fetch them by his facebook id, the result is an empty list, even though the entries are there in the database. Doing a select statement on the database returns no rows. This does not happen with other users.
I have noticed that the mentioned user's facebook id is slightly longer than the others'. I do not know if and how this affects the situation.
Interesting part:
When during debugging I created an entry not programmatically, but manually with a SQL INSERT statement directly on the database (marked red on the 1st screenshot), I could fetch the data by facebook id both in my app and with a select statement.
Do you have any ideas what is going on here?
Please check the screenshots:
[Screenshot: result of select * from table]
[Screenshot: result of select * from table where user_facebook_id = 10215905779020408]
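One thing worth checking, since the failing id is longer than the others (a hedged diagnostic, assuming PostgreSQL and a hypothetical table name), is the declared type of the id column and whether comparing as trimmed text behaves differently:
-- check the declared type of the id column (hypothetical table name)
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'your_table'
  AND column_name = 'user_facebook_id';

-- compare as trimmed text, in case of type coercion or stray whitespace
SELECT *
FROM your_table
WHERE trim(user_facebook_id::text) = '10215905779020408';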
Please help,
Thanks

How do I get a list of the reports available on any given reporting server?

I want to be able to input any given report server URL and display a list of the reports available on that server.
I found this question, and it's useful if I compile the project with a reference to a specific SQL Server (How do I get a list of the reports available on a reporting services instance). But (unless I'm just completely missing something, which is possible) it doesn't show me how to do what I've stated above.
You could query the ReportServer database of your reporting server.
SELECT *
FROM dbo.Catalog
WHERE Type = 2
That should give you a list of all of the reports.
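If you want to see the other item types too, here is a sketch that decodes the common Type codes (these values are the commonly cited ones and may vary by SSRS version):
SELECT Path,
       Name,
       CASE Type
            WHEN 1 THEN 'Folder'
            WHEN 2 THEN 'Report'
            WHEN 3 THEN 'Resource'
            WHEN 4 THEN 'Linked Report'
            WHEN 5 THEN 'Data Source'
            ELSE 'Other'
       END AS TypeName
FROM dbo.Catalog
ORDER BY Path;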
You can go to the Web Service URL (note: not the Report Manager URL). So if your main managing URL is http://server/Reports and the Web Service URL is http://server/ReportServer, open the second one. It will give you a raw listing of the available items.
Note that this will include reports, datasources, folders etc.
Same as the answer above, with a USE clause added at the top (in case this helps anyone):
USE ReportServer;
SELECT *
FROM dbo.Catalog
WHERE Type = 2
ORDER BY Name
I noted that the SELECT * above includes a field called Content, which might be an issue when exporting the result to Excel, so I tried a smaller list of columns:
USE ReportServer;
SELECT
ItemID,
Path,
Name,
ParentID,
Type,
Description,
Hidden,
CreatedByID,
CreationDate,
ModifiedByID,
ModifiedDate,
Parameter
FROM dbo.Catalog
WHERE Type = 2
ORDER BY Name
The Web Service URL answer above didn't seem to work for me, i.e. http://server/ReportServer (replacing server with my reporting server name); I get the message "The webpage cannot be found". Maybe that answer is version or security settings specific?

SQL Server Reporting Services 2005 - How to Handle Empty Reports

I was wondering if it is possible to not attach the Excel sheet if it is empty, and maybe write a different comment in the email when it is.
When I go to report delivery options, there's no such configuration.
Edit: I'm running SQL Server Reporting Services 2005.
Some possible workarounds as mentioned below:
MSDN: Reporting Services Extensions
NoRows and NoRowsMessage properties
I should look into these things.
I believe the answer is no, at least not out of the box. It shouldn't be difficult to write your own delivery extension given the printing delivery extension sample included in RS.
Yeah, I don't think that is possible. You could use the "NoRows" property of your table to display a message when no data is returned, but that wouldn't prevent the report from being attached. But at least when they opened the Excel file it would print out your custom message instead of an empty document.
Found this somewhere else...
I have a clean solution to this problem, the only down side is that a system administrator must create and maintain the schedule. Try these steps:
Create a subscription for the report with all the required recipients.
Set the subscription to run weekly on yesterday's day (ie if today is Tuesday, select Monday) with the schedule starting on today's date and stopping on today's date. Essentially, this schedule will never run.
Open the newly created job in SQL Management Studio, go to the steps and copy the line of SQL (it will look something like this: EXEC ReportServer.dbo.AddEvent @EventType='TimedSubscription', @EventData='1c2d9808-aa22-4597-6191-f152d7503fff')
Create your own job in SQL Server Agent with the actual schedule and use something like:
IF EXISTS (SELECT your test criteria...)
BEGIN
    EXEC ReportServer.dbo.AddEvent @EventType=... etc.
END
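A filled-in sketch, reusing the GUID copied in the step above; the test table and criteria are hypothetical:
IF EXISTS (SELECT 1 FROM dbo.MyReportData WHERE Processed = 0)  -- hypothetical test criteria
BEGIN
    EXEC ReportServer.dbo.AddEvent @EventType = 'TimedSubscription', @EventData = '1c2d9808-aa22-4597-6191-f152d7503fff';
END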
I have had success with using a Data-Driven Subscription and a table containing my subscribers, with the data-driven subscription query looking like this:
SELECT * FROM REPORT_SUBSCRIBERS WHERE EXISTS (SELECT QUERY_FROM_YOUR_REPORT)
In the delivery settings, the recipient is the data column containing my email addresses.
If the inner query returns no rows, then no emails will be sent.
For your purposes, you can take advantage of the "Include Report" and "Comment" delivery settings.
I imagine that a data-driven subscription query like this will work for you:
SELECT 'person1@domain.com; person2@domain.com' AS RECIPIENTS,
       CASE WHEN EXISTS (REPORT_QUERY) THEN 'TRUE' ELSE 'FALSE' END AS INCLUDE_REPORT,
       CASE WHEN EXISTS (REPORT_QUERY) THEN 'The report is attached' ELSE 'There was no data in this report' END AS COMMENT
Then use those columns in the appropriate fields when configuring the delivery settings for the subscription.