OEM Inventory and Usage Details SQL Script

I would like to extract the following information from the Oracle Enterprise Manager (OEM) repository database:
Target/Database Name
Host Name/Server
Parent Name/Target Name
Parent Type/Target Type
Database Version
Database Size
Platform/OS
OS Version
OS Vendor
OS Hardware
OS Update Level
Address Length (bits)
Clock Frequency(MHz)
Memory Size(MB)
Local Disk Space(GB)
ASM Disk Space
Total Enabled CPU Cores
Total CPU Sockets
Total CPU Threads
Lifecycle
Department
Contact
Location
In OEM this information is found under the "Inventory and Usage Details" section. So far I have come up with a query that only pulls "Database Name", "Target Type", "Database Version", "Server", "OS", "OS_VERSION" and "Lifecycle":
select
    db.TARGET_NAME                       DATABASE_NAME,
    db.TARGET_TYPE                       TARGET_TYPE,
    prop.PROPERTY_VALUE                  DATABASE_VERSION,
    os.TARGET_NAME                       SERVER,
    os.TYPE_QUALIFIER1                   OS,
    os.TYPE_QUALIFIER2                   OS_VERSION,
    nvl(lifecycle.PROPERTY_VALUE, 'NA')  LIFECYCLE
from
    SYSMAN.MGMT$TARGET db
    join SYSMAN.MGMT$TARGET os
        on db.HOST_NAME = os.TARGET_NAME
    join SYSMAN.MGMT$TARGET_PROPERTIES prop
        on db.TARGET_GUID = prop.TARGET_GUID
    left outer join SYSMAN.MGMT$TARGET_PROPERTIES lifecycle
        on (db.TARGET_GUID = lifecycle.TARGET_GUID
            and lifecycle.PROPERTY_NAME = 'orcl_gtp_lifecycle_status')
where
    prop.PROPERTY_NAME = 'Version'
    and db.TARGET_TYPE in ('rac_database', 'oracle_database', 'oracle_pdb')
    and os.TARGET_TYPE = 'host'
order by 1;
I am trying to rewrite this query to include the rest of the missing information.
Thanks
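For the Department, Contact and Location columns, the same left-join pattern used for Lifecycle can simply be repeated. A minimal sketch, assuming the standard OEM global target property names (orcl_gtp_department, orcl_gtp_contact, orcl_gtp_location); verify them against MGMT$TARGET_PROPERTIES in your own repository. The hardware and size figures (memory, CPU cores/sockets/threads, disk space, database size) are not stored as target properties and would need joins to other SYSMAN repository views, which are not shown here:

select
    db.TARGET_NAME                      DATABASE_NAME,
    nvl(lifecycle.PROPERTY_VALUE, 'NA') LIFECYCLE,
    nvl(dept.PROPERTY_VALUE, 'NA')      DEPARTMENT,
    nvl(cont.PROPERTY_VALUE, 'NA')      CONTACT,
    nvl(loc.PROPERTY_VALUE, 'NA')       LOCATION
from
    SYSMAN.MGMT$TARGET db
    -- one left join per global target property; property names are assumptions, check your repository
    left outer join SYSMAN.MGMT$TARGET_PROPERTIES lifecycle
        on (db.TARGET_GUID = lifecycle.TARGET_GUID and lifecycle.PROPERTY_NAME = 'orcl_gtp_lifecycle_status')
    left outer join SYSMAN.MGMT$TARGET_PROPERTIES dept
        on (db.TARGET_GUID = dept.TARGET_GUID and dept.PROPERTY_NAME = 'orcl_gtp_department')
    left outer join SYSMAN.MGMT$TARGET_PROPERTIES cont
        on (db.TARGET_GUID = cont.TARGET_GUID and cont.PROPERTY_NAME = 'orcl_gtp_contact')
    left outer join SYSMAN.MGMT$TARGET_PROPERTIES loc
        on (db.TARGET_GUID = loc.TARGET_GUID and loc.PROPERTY_NAME = 'orcl_gtp_location')
where
    db.TARGET_TYPE in ('rac_database', 'oracle_database', 'oracle_pdb')
order by 1;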

Related

Omitting the creation of a temporary access path with DB2/400 when accessing DDS-defined tables with SQL

I have two table definitions in DDS, compiled into *FILE objects and filled with data:
Kunpf:
A UNIQUE
A R KUNTBL
A FIRMA 60A ALWNULL
A KUNR 5S 0B
A KUNID 4S 0B
A K KUNR
A K KUNID
Kunsupf:
A R KUNSUTBL
A KUNID R B REFFLD(KUNID KUN/KUNPF)
A
A SUCHSTR 78A
A K SUCHSTR
A K KUNID
I'm using the following statement in interactive SQL (STRSQL):
SELECT DISTINCT FIRMA, KUNR FROM KUN/KUNPF
LEFT JOIN KUN/KUNSUPF ON (KUNPF.KUNID = KUNSUPF.KUNID)
WHERE SUCHSTR LIKE 'Freiburg%'
ORDER BY FIRMA
FOR READ ONLY
Every time I execute this statement, there is a considerable delay before the answer screen opens. Beforehand a message is shown stating that a temporary access path is being created.
How can I find out which temporary access path is created, and how? My goal is to make this access path permanent so it doesn't need to be rebuilt with every invocation of the query.
I searched the net (especially the IBM site), but what I found was mostly for DB2 on z/OS. The F4 prompting facility in STRSQL doesn't help: I was looking for something like MySQL's EXPLAIN SELECT. The IBM DB2 Advanced Functions and Administration PDF mentions a debug mode, but it seems to be available only from some (old) Windows tool that I don't have.
I'm using V4R5, if that is relevant.
To see the access path on the green screen:
STRDBG
STRSQL
run your statement
exit with F3
ENDDBG
DSPJOBLOG
The access path messages are at the bottom of the log (F10, then F18, as far as I know).
v4r5??? That's like 20 years old...
For IBM i, the "Run SQL Scripts" component of the old iSeries Navigator (Client Access for Windows) and of the new Access Client Solutions (ACS) contains Visual Explain (VE).
Luckily, it seems it was already available for V4R5:
http://ibmsystemsmag.com/ibmi/administrator/db2/database-performance-tuning-with-visual-explain/
Just start iNav, right click on "Database" and select "Run SQL Scripts"
Paste your query there and click "Visual Explain" -->"Run and Explain"
(or the corresponding button)
Optionally, on the green screen: do a STRDBG to enter debug mode, press F12 to continue, and then go into STRSQL. The DB2 optimizer will then write additional messages into the job log, giving you more information about what it is doing.
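Once the job log tells you which file and key columns the temporary access path was built over, you can make it permanent with a SQL index (or an equivalent keyed logical file). A hedged sketch, assuming for illustration that the job log reports a temporary access path over KUNSUPF keyed on KUNID; the index name KUNSUIDX is made up, and the key columns must match what the optimizer actually reports:

-- Illustration only: create a permanent access path matching what the
-- job log messages say the optimizer needed (key columns are an assumption here).
CREATE INDEX KUN/KUNSUIDX
    ON KUN/KUNSUPF (KUNID)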

Which is the correct disk used size information: the M_DISK_USAGE or the M_DISKS view?

There are two system views provided by the SAP HANA database: M_DISK_USAGE and M_DISKS.
While comparing the two, I noticed that the USED_SIZE values for the DATA, LOG, and other usage types differ between the two views.
Can someone please help me understand which view I should use if I want to monitor the disk usage of all usage types at the current time?
The question really is what you want to know.
If you want to know how large the filesystems of the HANA volumes are and how much space is left on them, then M_DISKS is the right view:
show free disk space in KiB:
/hana/data/SK1> df -BK .
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda5 403469844K 134366892K 269102952K 34% /hana
compared to the M_DISKS view (sizes converted from bytes to KiB):
DISK_ID DEVICE_ID HOST PATH SUBPATH FILESYSTEM_TYPE USAGE_TYPE TOTAL_SIZE_KB USED_SIZE_KB
1 113132 skullbox /hana/data/SK1/ mnt00001 xfs DATA 403469844 134366892
2 113132 skullbox /usr/sap/SK1/HDB01/backup/data/ xfs DATA_BACKUP 403469844 134366892
3 113132 skullbox /hana/log/SK1/ mnt00001 xfs LOG 403469844 134366892
4 113132 skullbox /usr/sap/SK1/HDB01/backup/log/ xfs LOG_BACKUP 403469844 134366892
5 113132 skullbox /usr/sap/SK1/HDB01/skullbox/ xfs TRACE 403469844 134366892
M_DISK_USAGE, on the other hand, shows how much the HANA instance itself has allocated in total, grouped by usage type.
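So for monitoring how much HANA itself has allocated per usage type at the current point in time, query M_DISK_USAGE. A minimal sketch (sizes in these views are in bytes, converted to GiB here):

-- Space allocated by the HANA instance, per host and usage type
SELECT host,
       usage_type,
       ROUND(used_size / 1024.0 / 1024 / 1024, 2) AS used_gib
FROM   m_disk_usage
ORDER  BY host, usage_type;

-- For comparison: filesystem-level sizes per volume from M_DISKS
SELECT host,
       usage_type,
       ROUND(used_size  / 1024.0 / 1024 / 1024, 2) AS fs_used_gib,
       ROUND(total_size / 1024.0 / 1024 / 1024, 2) AS fs_total_gib
FROM   m_disks
ORDER  BY host, usage_type;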

SL Baremetal order with RAID disks ---> The location provided for this order is invalid

I am trying to order a bare metal machine, but I get this error:
root#ubuntu:~# ruby array.rb
/usr/local/rvm/rubies/ruby-1.9.3-p551/lib/ruby/1.9.1/xmlrpc/client.rb:414:in `call': The location provided for this order is invalid. (XMLRPC::FaultException)
from /usr/local/rvm/gems/ruby-1.9.3-p551/gems/softlayer_api-3.0.2/lib/softlayer/Service.rb:267:in `call_softlayer_api_with_params'
from /usr/local/rvm/gems/ruby-1.9.3-p551/gems/softlayer_api-3.0.2/lib/softlayer/Service.rb:196:in `method_missing'
from array.rb:20:in `<main>'
I've also tried to use lon02 (id 358694), but I get the same error.
I want to automate the order for the following configuration:
Server:
Dual Intel Xeon E5-2650 (8 Cores, 2.00 GHz)
Second Processor Intel Xeon E5-2650 (8 Cores, 2.00 GHz)
RAM:
16 GB
Operating System:
Ubuntu Linux 14.04 LTS Trusty Tahr - Minimal Install (64 bit)
Disk Controller:
RAID
10 Hard Drives of type:
800 GB SSD (10 DWPD)
Disk 0-3 Raid 6. Disk 4 HotSpare - Disk 5-8 Raid 6 Disk 9 HotSpare
Public Bandwidth:
500 GB Bandwidth
Uplink Port Speeds:
1 Gbps Redundant Public & Private Network Uplinks
Private Network Port 1 Gbps Redundant Private Uplinks
Public Network Port 1 Gbps Redundant Public Uplinks
Power Supply:
Redundant Power Supply
Monitoring:
Host Ping and TCP Service Monitoring
Response:
Automated Reboot from Monitoring
VPN Management - Private Network:
Unlimited SSL VPN Users & 1 PPTP VPN User per account
Vulnerability Assessments & Management:
Nessus Vulnerability Assessment & Reporting
Primary IP Addresses:
1 IP Address
Notification:
Email and Ticket
Remote Management:
Reboot / KVM over IP
This is my code snippet:
require 'rubygems'
require 'softlayer_api'
$SL_API_USERNAME = "-------"
$SL_API_KEY = "------------------"
client = SoftLayer::Service.new("SoftLayer_Product_Order");
product={
"complexType"=>"SoftLayer_Container_Product_Order_Hardware_Server",
"quantity"=>1,
"hardware"=>[{"hostname"=>"dysa-ca-east-0-baremetal-uaa-test-ai", "domain"=>"dysa-ca-east", "primaryNetworkComponent"=>{"networkVlan"=>{"id"=>"888013"}}, "primaryBackendNetworkComponent"=>{"networkVlan"=>{"id"=>"888015"}}}],
"location"=>"448994",
"packageId"=>142,
"prices"=>[{"id"=>29899}, {"id"=>29899}, {"id"=>37622}, {"id"=>34742}, {"id"=>36037}, {"id"=>35529}, {"id"=>35529}, {"id"=>35529}, {"id"=>35529}, {"id"=>35529}, {"id"=>35529}, {"id"=>35529}, {"id"=>35529}, {"id"=>35529}, {"id"=>35529}, {"id"=>33867}, {"id"=>26109}, {"id"=>25014}, {"id"=>34807}, {"id"=>34241}, {"id"=>32500}, {"id"=>34996}, {"id"=>33483}, {"id"=>35310}], "useHourlyPricing"=>false,
"storageGroups"=>[{"arrayTypeId"=>"4", "hardDrives"=>"[0,1,2,3]", "hotSpareDrives"=>"[4]"}, {"arrayTypeId"=>"4", "hardDrives"=>"[5,6,7,8]", "hotSpareDrives"=>"[9]"}]}
client.verifyOrder(product)
Could you please help me?
Thanks a lot
This is an issue: the package you are using does not have any available datacenter. You can verify that by running this method:
http://sldn.softlayer.com/reference/services/SoftLayer_Product_Package/getRegions
Here is an example using REST:
https://api.softlayer.com/rest/v3/SoftLayer_Product_Package/142/getRegions
method: GET
That is why you are getting the error "The location provided for this order is invalid."
I suggest you open a ticket in SoftLayer's portal and report the issue; meanwhile you can try another package (if you want to).
Regards
This request may help you to get the valid item prices for a location:
https://[username]:[apikey]@api.softlayer.com/rest/v3/SoftLayer_Product_Package/200/getItemPrices?objectMask=mask[id,item[keyName,description],pricingLocationGroup[locations[id, name, longName]]]
Method: GET
Price IDs with locationGroupId = null are considered "standard prices", and the API will internally switch them to the appropriate prices for the customer. It is recommended to execute verifyOrder first to see whether the intended order is OK (the fee can vary).
To get more details about this, please review:
http://sldn.softlayer.com/blog/cmporter/Location-based-Pricing-and-You
This is another example using itemPrices in PHP.
https://softlayer.github.io/php/get_required_price_id/
If the problem persists, please open a SL ticket because it could be a bug.
References:
http://sldn.softlayer.com/blog/cmporter/Location-based-Pricing-and-You
http://sldn.softlayer.com/reference/services/SoftLayer_Product_Package/getItemPrices
http://sldn.softlayer.com/reference/services/SoftLayer_Product_Package/getAllObjects
http://sldn.softlayer.com/reference/services/SoftLayer_Product_Package
Partially resolved: I used packageId=251, but the ordered bare metal has 10x RAID 0 (each with only one disk) instead of 2x RAID 6. :(
What is wrong with the following code?
require 'rubygems'
require 'softlayer_api'
$SL_API_USERNAME = "-----"
$SL_API_KEY = "-----"
client = SoftLayer::Service.new("SoftLayer_Product_Order");
product={
"complexType"=>"SoftLayer_Container_Product_Order_Hardware_Server",
"quantity"=>1,
"hardware"=>[{"hostname"=>"dys1-0-baremetal-uaa",
"domain"=>"softlayer.com",
"primaryNetworkComponent"=>{"networkVlan"=>{"id"=>"MY_VLAN_ID"}},
"primaryBackendNetworkComponent"=>{"networkVlan"=>{"id"=>"MY_VLAN_ID"}}}],
"location"=>"358694",
"packageId"=>251,
"prices"=>[{"id"=>50675},
{"id"=>37622},
{"id"=>49427},
{"id"=>141945},
{"id"=>50143},
{"id"=>50143},
{"id"=>50143},
{"id"=>50143},
{"id"=>50143},
{"id"=>50143},
{"id"=>50143},
{"id"=>50143},
{"id"=>50143},
{"id"=>50143},
{"id"=>50263},
{"id"=>26109},
{"id"=>25014},
{"id"=>34807},
{"id"=>50223},
{"id"=>34241},
{"id"=>32500},
{"id"=>34996},
{"id"=>33483},
{"id"=>35310}],
"useHourlyPricing"=>false,
"storageGroups"=>[{"arrayTypeId"=>4,
"hardDrives"=>[0,1,2,3],
"hotSpareDrives"=>[4]},
{"arrayTypeId"=>4,
"hardDrives"=>[5,6,7,8],
"hotSpareDrives"=>[9]}]
}
p client.verifyOrder(product)

SQL Server memory usage more than 3 GB on a 64-bit machine

My SQL Server process memory usage shows more than 3 GB on a 64-bit machine.
I tried to find the problem with:
SELECT
[object_name], [counter_name],
[instance_name], [cntr_value]
FROM sys.[dm_os_performance_counters]
WHERE [object_name] = 'SQLServer:Buffer Manager'
and the result was:
Buffer cache hit ratio 6368
Buffer cache hit ratio base 6376
Page lookups/sec 438640376
Free list stalls/sec 182
Free pages 215468
Total pages 442368
Target pages 442368
Database pages 196000
Reserved pages 0
Stolen pages 30900
Lazy writes/sec 1510
Readahead pages/sec 1204816
Page reads/sec 1384292
Page writes/sec 765586
Checkpoint pages/sec 129207
AWE lookup maps/sec 0
AWE stolen maps/sec 0
AWE write maps/sec 0
AWE unmap calls/sec 0
AWE unmap pages/sec 0
Page life expectancy 119777
As this seemed like too many page lookups, I then executed this query:
SELECT 1.0*cntr_value /
(SELECT 1.0*cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Batch Requests/sec')
AS [PageLookupPct]
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page lookups/sec'
The result for this query was more than 350.
One more metric I tried:
SELECT (1.0*cntr_value/128) /
(SELECT 1.0*cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name like '%Buffer Manager%'
AND lower(counter_name) = 'Page life expectancy')
AS [BufferPoolRate]
FROM sys.dm_os_performance_counters
WHERE object_name like '%Buffer Manager%'
AND counter_name = 'total pages'
The result was 0.029.
I picked these queries up from Tim Ford's blogs.
What could be the possible cause of the memory usage? Is it a bad query plan, or should I look at some other area?
[Update]
MEMORYCLERK_SQLBUFFERPOOL is using 3573824 (VM reserved) 3573824 (VM committed)
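For reference, a sketch of how such memory clerk figures can be pulled from sys.dm_os_memory_clerks (available columns vary slightly between SQL Server versions):

-- Top memory clerks by committed virtual memory (KB)
SELECT TOP (10)
       [type],
       SUM(virtual_memory_reserved_kb)  AS vm_reserved_kb,
       SUM(virtual_memory_committed_kb) AS vm_committed_kb
FROM   sys.dm_os_memory_clerks
GROUP  BY [type]
ORDER  BY SUM(virtual_memory_committed_kb) DESC;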
Check your setting under Properties \ Memory \ Maximum Server Memory.
See the answer on this post: Seeing High Memory Usage In SQL Server 2012. It outlines (roughly) that SQL Server will claim what it can, up to the configured maximum. Unless you need this memory for something else, do not worry about it.
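If you do want to leave headroom for other processes, a minimal sketch of checking and capping the limit with sp_configure (the 3072 MB value here is only an example):

-- Show the current max server memory setting
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';

-- Example value only: cap the buffer pool at 3 GB
EXEC sp_configure 'max server memory (MB)', 3072;
RECONFIGURE;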

Recently the Sybase database reported some errors, and I don't know how to solve them

Recently I used the Sybase database and it reported some errors. I searched for a lot of information and expanded the configuration, but it's still happening.
My server's memory and CPU usage are normal, so I don't know whether the machine's system resources really are insufficient.
This is part of my Sybase error log:
02:00000:00000:2011/12/31 10:13:21.60 kernel nl__read_defer: read failed on socket 36.
02:00000:00000:2011/12/31 10:13:25.92 kernel nl__read_defer: read failed on socket 154.
system resources not enough,Can not complete the requested service
06:00000:00000:2011/12/31 10:13:32.70 kernel nl__read_defer: read failed on socket 164.
system resources not enough,Can not complete the requested service
00:00000:00000:2011/12/31 10:13:34.40 kernel nl__read_defer: read failed on socket 34.
system resources not enough,Can not complete the requested service
....more
00:00000:00014:2012/01/04 10:25:03.59 kernel NT operating system error 1450 in module 'e:\ase1253\porttree\svr\sql\generic\ksource\strmio\n_winsock.c' at line 1987
00:00000:00014:2012/01/04 10:25:03.59 kernel NT operating system error 1450 in module 'e:\ase1253\porttree\svr\sql\generic\ksource\strmio\n_winsock.c' at line 1987
....more
ct_results(): network packet layer: internal net library error: Net-Library operation terminated due to disconnect.
My Sybase product is ASE, and my machine has 4 GB of memory.
These are the Sybase configuration settings I changed:
[Named Cache:default data cache]
cache size = 1400M
[Meta-Data Caches]
number of open databases = 50
number of open objects = 5000
number of open indexes = 5000
[Disk I/O]
number of devices = 20
[Physical Memory]
max memory = 2000000
[Processors]
max online engines = 8
number of engines at startup = 8
[SQL Server Administration]
procedure cache size = 400000
[User Environment]
number of user connections = 200
[Lock Manager]
number of locks = 150000
[Rep Agent Thread Administration]
enable rep agent threads = 1
the other configuration are Default
Any help would be greatly appreciated.
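For whoever picks this up later: the memory settings above can be cross-checked from isql with sp_configure. Note that ASE memory parameters such as max memory are expressed in 2 KB units, so max memory = 2000000 is roughly 4 GB, which is essentially all of the machine's physical memory, leaving little for the OS and the network stack. A sketch of the checks (parameter names are standard ASE sp_configure options):

-- Run from isql against the ASE server; compare the configured and run values
sp_configure 'max memory'
go
sp_configure 'total logical memory'
go
sp_configure 'procedure cache size'
go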