How to get a list of WebLogic applications via WLST

I am trying to build an inventory of some WebLogic servers and expect to do it with WLST.
I was able to get the list of applications (100+) following the example at http://middlewaremagic.com/weblogic/?p=1473:
connect('admin','xxxxxxx','t3://server1:7001')
cd('AppDeployments')
deploymentsList = cmo.getAppDeployments()
for app in deploymentsList:
    domainConfig()
    cd('/AppDeployments/' + app.getName() + '/Targets')
    mytargets = ls(returnMap='true')
    domainRuntime()
    cd('AppRuntimeStateRuntime')
    cd('AppRuntimeStateRuntime')
    for targetinst in mytargets:
        curstate4 = cmo.getCurrentState(app.getName(), targetinst)
        print '----', app.getApplicationName(), ' | ', app.getVersionIdentifier(), ' | ', app.getModuleType(), ' | ', targetinst, ' | ', curstate4, ' | ', app.getSecurityDDModel(), ' | ', app.getAbsoluteSourcePath()
but I am missing some important info:
Context Root
Test Points
Can anybody help me extract these?
And maybe there is another way to list all applications on a WebLogic server with their basic properties?
// WebLogic Server Version: 12.2.1.0.0
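One possible approach for the context root (a sketch only, assuming the standard runtime MBean APIs in the domainRuntime tree; not verified on 12.2.1): each deployed web application exposes a WebAppComponentRuntime MBean whose getContextRoot() returns it. "Test Points" is not a standard MBean attribute, so that would have to come from somewhere else.
domainRuntime()
# domainRuntimeService is a built-in WLST variable pointing at the DomainRuntimeServiceMBean
for server in domainRuntimeService.getServerRuntimes():
    for appRuntime in server.getApplicationRuntimes():
        for component in appRuntime.getComponentRuntimes():
            # only web-app components expose a context root
            if component.getType() == 'WebAppComponentRuntime':
                print server.getName(), ' | ', appRuntime.getName(), ' | ', component.getContextRoot()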

How can I check the maximum value from a set of tables in SQL Server (if possible)?

We have a set of databases (80 in total). Every single one has a table called tblProfessions. The tables are not standardized. For example:
EDIT: all the databases are on the same server.
The DB1.dbo.tblProfessions is like:
intProfessionCode | strProfessionDescription
------------------+--------------------------
1                 | lawyer
2                 | dentist
...               | ...
30                | doctor
And the DB72.dbo.tblProfessions is as follows:
intProfessionCode | strProfessionDescription
------------------+--------------------------
1                 | designer
2                 | butcher
...               | ...
80                | chef
Suppose I ran a script from DB1 to DB72 and found that the biggest table has 80 entries (in this case DB72 is the biggest one).
With my limited knowledge, all I know is to run the script below database by database and write the result down in a spreadsheet manually:
SELECT MAX(intProfessionCode) FROM [DB].dbo.tblProfessions;
Is there a script to run and loop through all the tblProfessions and get the one with the most entries? All I want is the biggest number found.
Thanks in advance.
You should be able to do something like this:
WITH dat
AS
(
SELECT 'db1' AS database_name, MAX(intProfessionCode) AS max_intProfessionCode
FROM DB1.dbo.tblProfessions
UNION ALL
...
UNION ALL
SELECT 'db72' AS database_name, MAX(intProfessionCode) AS max_intProfessionCode
FROM DB72.dbo.tblProfessions
)
SELECT dat.database_name, dat.max_intProfessionCode
FROM dat
INNER JOIN (SELECT MAX(max_intProfessionCode) AS max_intProfessionCode_overall
FROM dat) y
ON dat.max_intProfessionCode = y.max_intProfessionCode_overall
For situations like this, I usually query the catalog to write the above script rather than typing it out:
WITH
dat AS
(
SELECT STRING_AGG('SELECT ''' + QUOTENAME(s.name) + ''' AS db,
MAX(intProfessionCode) AS max_intProfessionCode
FROM ' + QUOTENAME(s.name) + '.' + QUOTENAME('dbo') + '.' + QUOTENAME('tblProfessions') + '
UNION ALL',' ') AS s
FROM sys.databases s
WHERE s.name LIKE 'DB%' --EDIT APPROPRIATELY
)
SELECT 'WITH dat
AS
(' + SUBSTRING(s,1,LEN(s) - LEN(' UNION ALL')) + ')
SELECT dat.db, dat.max_intProfessionCode
FROM dat
INNER JOIN (SELECT MAX(max_intProfessionCode) AS max_intProfessionCode_overall
FROM dat) y
ON dat.max_intProfessionCode = y.max_intProfessionCode_overall' AS scrpt
FROM dat;
Make sure the above is only returning data for the appropriate databases by editing the WHERE clause in the CTE, then copy the output, paste elsewhere and run it to get your results.
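If you would rather not copy and paste the generated script, here is a minimal sketch of the same idea that builds and executes the statement in one go (it assumes SQL Server 2017+ for STRING_AGG, reuses the 'DB%' filter, and simply returns the single largest value together with the database it came from):
DECLARE @scrpt nvarchar(max);

-- Build the whole statement into a variable instead of copy/pasting it.
-- CAST to nvarchar(max) so STRING_AGG is not limited to 8000 characters.
SELECT @scrpt =
    N'WITH dat AS (' +
    STRING_AGG(CAST(
        N'SELECT ''' + QUOTENAME(d.name) + N''' AS db,
                 MAX(intProfessionCode) AS max_intProfessionCode
          FROM ' + QUOTENAME(d.name) + N'.dbo.tblProfessions'
        AS nvarchar(max)), N' UNION ALL ') +
    N') SELECT TOP (1) dat.db, dat.max_intProfessionCode
        FROM dat
        ORDER BY dat.max_intProfessionCode DESC;'
FROM sys.databases d
WHERE d.name LIKE N'DB%';   -- EDIT APPROPRIATELY

EXEC sys.sp_executesql @scrpt;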

How to test this vulnerability?

The query time is controllable using parameter value [' | case randomblob(1000000000) when not null then "" else "" end | '], which caused the request to take [142] milliseconds, parameter value [' | case randomblob(1000000000) when not null then "" else "" end | '], which caused the request to take [142] milliseconds, when the original unmodified query with value [24] took [66] milliseconds.
So I found a SQL injection vuln on my site, and the payload is:
' | case randomblob(1000000000) when not null then "" else "" end | '
My site:
https://sample.com/cdn-cgi/bm/cv/result?req_id=6506bd25b9e42c3e
I don't know how to inspect the database with sqlmap to see whether the vulnerability is really that serious. How can I test this SQL injection manually?
The PortSwigger links below should help you understand the issue. If your server's response is delayed because of the request, your DB server is vulnerable to SQLi.
https://portswigger.net/web-security/sql-injection/blind/lab-time-delays
https://portswigger.net/web-security/sql-injection/blind/lab-time-delays-info-retrieval
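A minimal manual check (a sketch, not part of the answer): time a request with the original value against one with the scanner's payload and compare. The base URL and the req_id parameter are taken from the question; whether req_id really is the injectable parameter is an assumption.
# Sketch: compare response times of the original value vs. the time-based payload.
# Assumes "req_id" is the injectable parameter (taken from the question's URL).
import time
import requests

URL = "https://sample.com/cdn-cgi/bm/cv/result"
PAYLOAD = '\' | case randomblob(1000000000) when not null then "" else "" end | \''

def timed_get(value):
    start = time.monotonic()
    requests.get(URL, params={"req_id": value}, timeout=60)
    return time.monotonic() - start

baseline = timed_get("24")      # the original, unmodified value from the report
injected = timed_get(PAYLOAD)   # the time-based payload
print("baseline: %.3fs  injected: %.3fs" % (baseline, injected))
# If the payload is consistently much slower (try larger blob sizes as well),
# the parameter is likely vulnerable to time-based blind SQL injection.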

Sessions are going into INACTIVE status

I am running one SQL query to fetch records from a table with multiple threads. Initially all the threads go into INACTIVE status, and slowly they start coming back to ACTIVE status.
I have created a HASH partition on my table based on the loc column.
I have also created an index on the (item, loc) columns.
select item || ',' || loc || ',' || stock_on_hand || ',' || on_order_qty || ',' || 0 || ',' || in_transit_qty AS csv
from makro_m_oplcom_stg
where loc = ${loc} and ITEM_INV_IND = 0;
When I run this query in my DB for one loc it takes only one second to fetch the records, but when I run it from a shell script in multi-threaded mode the sessions go into INACTIVE status.
This is my wrapper script. Inside it I call another shell script that runs the query above.
while [ $thread -le $SLOTS ]
do
    sleep 1
    LOG_MESSAGE "$pgm_name Started by batch_makro_oplcom_process for thread $thread out of $SLOTS by ${USER}"
    (
        ksh $pgm_name.ksh $UP $SLOTS $thread
        if [ $? -ne 0 ]; then
            LOG_MESSAGE "$pgm_name failed for thread: $thread, check the $pgm_name error file."
            exit 1
        else
            LOG_MESSAGE "$pgm_name executed successfully for thread: $thread."
        fi
    ) &
    let thread=thread+1
done
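A generic diagnostic sketch (not part of the question): while the threads are running, v$session shows whether the "INACTIVE" sessions are simply idle between calls or are waiting on something. The username filter is a placeholder; use the account the shell script connects as.
-- Diagnostic sketch: inspect the sessions opened by the threads.
-- 'YOUR_APP_USER' is a hypothetical placeholder.
SELECT sid, serial#, status, sql_id, event, last_call_et
FROM   v$session
WHERE  username = 'YOUR_APP_USER'
ORDER  BY status, last_call_et DESC;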

Error with single quotes inside text in select statement

Getting the error using Postgresql 9.3:
select 'hjhjjjhjh'mnmnmnm'mn'
Error:
ERROR: syntax error at or near "'mn'"
SQL state: 42601
Character: 26
I tried replacing the single quotes inside the text with:
select REGEXP_REPLACE('hjhjjjhjh'mnmnmnm'mn', '\\''+', '''', 'g')
and
select '$$hjhjjjhjh'mnmnmnm'mn$$'
but it did not work.
Below is the real code:
CREATE OR REPLACE FUNCTION generate_mallet_input2() RETURNS VOID AS $$
DECLARE
sch name;
r record;
BEGIN
FOR sch IN
select schema_name from information_schema.schemata where schema_name not in ('test','summary','public','pg_toast','pg_temp_1','pg_toast_temp_1','pg_catalog','information_schema')
LOOP
FOR r IN EXECUTE 'SELECT rp.id as id,g.classified as classif, concat(rp.summary,rp.description,string_agg(c.message, ''. '')) as mess
FROM ' || sch || '.report rp
INNER JOIN ' || sch || '.report_comment rc ON rp.id=rc.report_id
INNER JOIN ' || sch || '.comment c ON rc.comments_generatedid=c.generatedid
INNER JOIN ' || sch || '.gold_set g ON rp.id=g.key
WHERE g.classified = any (values(''BUG''),(''IMPROVEMENT''),(''REFACTORING''))
GROUP BY g.classified,rp.summary,rp.description,rp.id'
LOOP
IF r.classif = 'BUG' THEN
EXECUTE format('Copy( select REPLACE(''%s'', '''', '''''''') as m ) To ''/tmp/csv-temp/BUG/'|| quote_ident(sch) || '-' || r.id::text || '.txt ''',r.mess);
ELSIF r.classif = 'IMPROVEMENT' THEN
EXECUTE format('Copy( select REPLACE(''%s'', '''', '''''''') as m ) To ''/tmp/csv-temp/IMPROVEMENT/'|| quote_ident(sch) || '-' || r.id || '.txt '' ',r.mess);
ELSIF r.classif = 'REFACTORING' THEN
EXECUTE format('Copy( select REPLACE(''%s'', '''', '''''''') as m ) To ''/tmp/csv-temp/REFACTORING/'|| quote_ident(sch) || '-' || r.id || '.txt '' ',r.mess);
END IF;
END LOOP;
END LOOP;
RETURN;
END;
$$ LANGUAGE plpgsql STRICT;
select * FROM generate_mallet_input2();
Error:
ERROR: syntax error at or near "mailto"
LINHA 1: ...e.http.impl.conn.SingleClientConnManager$HTTPCLIENT-803).The new SSLSocketFactory.connectSocket method calls the X509HostnameVerifier with InetSocketAddress.getHostName() parameter. When the selected IP address has a reverse lookup name, the verifier is called with the resolved name, and so the IP check fails.4.0 release checked for original ip/hostname, but this cannot be done with the new connectSocket() method. The TestHostnameVerifier.java only checks 127.0.0.1/.2 and so masked the issue, because the matching certificate has both "localhost" and "127.0.0.1", but actually only "localhost" is matched. A test case with 8.8.8.8 would be better.I committed a slightly better workaround for the problem that does not require reverse DNS lookups.Oleg. I had to resort to a fairly ugly hack in order to fix the problem. A better solution would require changes to the X509HostnameVerifier API. I felt that deprecation of the X509HostnameVerifier interface was not warranted, as the use of an IP address for CN in a certificate was a hack by itself.Please review.Oleg . Even the second one requires the server presenting a trusted certificate. I don't see much difference beetween the two cases.. Wrong test. Try to connect to https://93.62.162.60:8443/. The certificate has CN=93.62.162.60, but the check is done for 93-62-162-60.ip23.fastwebnet.it. Hmm, my comment was not meant to revert the patch. The first scenario was already exploitable and still is. Your patch is the "correct" solution without breaking the API.But to avoid any security issue (including the ones already present) the API have to be changed.. I am not able to reproduce the problem. SSL connections to remote peers pass the default host name verification.---executing requestGET https://www.verisign.com/ HTTP/1.1[DEBUG] SingleClientConnManager - Get connection for route HttpRoute[{s}->https://www.verisign.com][DEBUG] DefaultClientConnectionOperator - Connecting to www.verisign.com/69.58.181.89:443[DEBUG] RequestAddCookies - CookieSpec selected: best-match[DEBUG] DefaultHttpClient - Attempt 1 to execute request[DEBUG] DefaultClientConnection - Sending request: GET / HTTP/1.1[DEBUG] headers - >> GET / HTTP/1.1[DEBUG] headers - >> Host: www.verisign.com[DEBUG] headers - >> Connection: Keep-Alive[DEBUG] headers - >> User-Agent: Apache-HttpClient/4.1 (java 1.5)[DEBUG] DefaultClientConnection - Receiving response: HTTP/1.1 200 OK[DEBUG] headers - << HTTP/1.1 200 OK[DEBUG] headers - << Date: Thu, 03 Feb 2011 20:14:35 GMT[DEBUG] headers - << Server: Apache[DEBUG] headers - << Set-Cookie: v1st=D732270AE4FC9F76; path=/; expires=Wed, 19 Feb 2020 14:28:00 GMT; domain=.verisign.com[DEBUG] headers - << Set-Cookie: v1st=D732270AE4FC9F76; path=/; expires=Wed, 19 Feb 2020 14:28:00 GMT; domain=.verisign.com[DEBUG] headers - << X-Powered-By: PHP/5.2.13[DEBUG] headers - << Keep-Alive: timeout=5, max=100[DEBUG] headers - << Connection: Keep-Alive[DEBUG] headers - << Transfer-Encoding: chunked[DEBUG] headers - << Content-Type: text/html[DEBUG] ResponseProcessCookies - Cookie accepted: "[version: 0][name: v1st][value: D732270AE4FC9F76][domain: .verisign.com][path: /][expiry: Wed Feb 19 15:28:00 GMT+01:00 2020]". [DEBUG] ResponseProcessCookies - Cookie accepted: "[version: 0][name: v1st][value: D732270AE4FC9F76][domain: .verisign.com][path: /][expiry: Wed Feb 19 15:28:00 GMT+01:00 2020]". 
[DEBUG] DefaultHttpClient - Connection can be kept alive for 5000 MILLISECONDS----------------------------------------HTTP/1.1 200 OKResponse content length: -1[DEBUG] SingleClientConnManager - Releasing connection org.apache.http.impl.conn.SingleClientConnManager$ConnAdapter#15ad5c6[DEBUG] DefaultClientConnection - Connection shut down---Are you using a custom SSL socket factory by any chance? Does it implement the LayeredSchemeSocketFactory interface?Oleg. Great work, good patch, thanks!. Well, I looked at the patch. It should fix the issue (though not completely, since the reverse lookup could give a wrong/unresolvable hostname), but as you said it's a crude hack, and this opens to other security issues. Unfortunately the clean fix requires API modification.You say using an IP address as CN is a hack, but actually using it as an ipAddress SubjectAlternativeName is perfectly valid.The security issues arise from the fact that httpclient tries to match dns generated data (reverse lookups and now also resolved hostnames) instead of what the user actually typed, opening to DNS poisoning or connection redirect attacks.First scenario:- user wants to connect to 1.2.3.4- DNS reverse lookup is xxx.yyy.zzz- a malicious proxy redirects the connection to server 4.3.2.1- server certificate contains CN or SAN set to xxx.yyy.zzz- All OK (but shouldn't)Second scenario:- user wants to connect to xxx.yyy.zzz- hacked DNS incorrectly resolve it to 1.2.3.4- server certificate has CN or SAN set to 1.2.3.4- The connection is established OK (but clearly shouldn't). Fair enough. I'll revert the patch and close the issue as WONTFIXOleg. The first scenario you are describing would also require involvement of green men from Mars and the malicious 4.3.2.1 server sending a certificate trusted by the client to be practical. Oleg', '', '''') as m ) To '/tmp/csv-temp/BUG/httpclient-HTTPCLIENT-1051.txt '
CONTEXT: PL/pgSQL function generate_mallet_input2() line 31 at EXECUTE statement
********** Error **********
ERROR: syntax error at or near "mailto"
SQL state: 42601
Context: PL/pgSQL function generate_mallet_input2() line 31 at EXECUTE statement
The retrieved content is long text about project issues in software repositories and can contain HTML. The quotes in this content are causing the problem.
It is not the content of the string that needs to be escaped, but its representation within the SQL you are sending to the server.
In order to represent a single ', you need to write two in the SQL syntax: ''. So, 'IMSoP''s answer' represents the string IMSoP's answer, '''' represents ', and '''''' represents ''.
But the crucial thing is you need to do this before trying to run the SQL. You can't paste an invalid SQL command into a query window and tell it to heal itself.
Automation of the escaping therefore depends entirely how you are creating that SQL. Based on your updated question, we now know that you are creating the SQL statement using pl/pgsql, in this format() call:
format('Copy( select REPLACE(''%s'', '''', '''''''') as m ) To ''/tmp/csv-temp/BUG/'|| quote_ident(sch) || '-' || r.id::text || '.txt ''',r.mess)
Let's simplify that a bit to make the example clearer:
format('select REPLACE(''%s'', '''', '''''''') as m', r.mess)
If r.mess was foo, the result would look like this:
select REPLACE('foo', '', ''''') as m
This REPLACE won't do anything useful, because the search argument is an empty string and the replacement has 3 ' marks in it; but even if you fixed the number of ' marks, it won't work. If the value of r.mess was instead bad'stuff, you'd get this:
select REPLACE('bad'stuff', '', ''''') as m
That's invalid SQL; no matter where you try to run it, it won't work, because Postgres thinks the 'bad' is a string, and the stuff that comes next is invalid syntax.
Think about how it will look if r.mess is SQL injection'); DROP TABLE users --:
select REPLACE('SQL injection'); DROP TABLE users; --', '', ''''') as m
Now we've got valid SQL, but it's probably not what you wanted!
So what you need to do is escape the ' marks in r.mess before you mix it into the string:
format('select ''%s'' as m', REPLACE(r.mess, '''', ''''''))
Now we're changing bad'stuff to bad''stuff before it goes into the SQL, and ending up with this:
select 'bad''stuff' as m
This is what we wanted.
There's actually a few better ways to do this, though:
Use the %L modifier to the format function, which outputs an escaped and quoted string literal:
format('select %L as m', r.mess)
Use the quote_literal() or quote_nullable() string functions instead of replace(), and concatenate the string together like you do with the filename (a quick illustration of both %L and quote_literal() follows at the end of this list):
'select ' || quote_literal(r.mess) || ' as m'
Finally, if the function really looks like it does in your question, you can avoid the whole problem by not using a loop at all; just copy each set of rows into a file using an appropriate WHERE clause:
EXECUTE 'Copy (
SELECT concat(rp.summary,rp.description,string_agg(c.message, ''. '')) as mess
FROM ' || sch || '.report rp
INNER JOIN ' || sch || '.report_comment rc ON rp.id=rc.report_id
INNER JOIN ' || sch || '.comment c ON rc.comments_generatedid=c.generatedid
INNER JOIN ' || sch || '.gold_set g ON rp.id=g.key
WHERE g.classified = ''BUG'' -- <-- Note changed WHERE clause
GROUP BY g.classified,rp.summary,rp.description,rp.id
) To ''/tmp/csv-temp/BUG/' || quote_ident(sch) || '.txt ''';
Repeat for IMPROVEMENT and REFACTORING. I can't be sure, but in general, acting on a set of rows at once is more efficient than looping over them. Here, you'll have to do 3 queries, but the = any() in your original version is probably fairly inefficient anyway.
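Here is the quick illustration promised above of what %L and quote_literal() produce, using a hypothetical value in place of r.mess:
SELECT format('select %L as m', 'bad''stuff');
-- returns: select 'bad''stuff' as m

SELECT 'select ' || quote_literal('bad''stuff') || ' as m';
-- returns: select 'bad''stuff' as m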
I'm taking a stab at this now that I think I know what you are asking.
You have a field in a table such that when you run SELECT <field> FROM <table> you get back this result:
'This'is'a'test'
You want this result to look like this instead:
'This''is''a''test'
So:
CREATE Table test( testfield varchar(30));
INSERT INTO test VALUES ('''This''is''a''test''');
You can run:
SELECT
'''' || REPLACE(Substring(testfield FROM 2 FOR LENGTH(testfield) - 2),'''', '''''') || ''''
FROM Test;
This will take only the part between the first and last single quote, then replace the inner single quotes with doubled single quotes. Finally it concatenates single quotes back onto the beginning and end.
SQL Fiddle:
http://sqlfiddle.com/#!15/a99e6/4
If it's not doubled single quotes that you want in the interior of your string result, you can change the replacement in the REPLACE() function to the appropriate character(s). Likewise, if it's not single quotes you want wrapping the string, you can change those in the concatenation.
You need to escape your quoted ' marks:
select 'hjhjjjhjh''mnmnmnm''mn'
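One more option worth mentioning, since the question also tried dollar quoting: the $$ delimiters replace the outer quotes rather than going inside them, so the inner single quotes need no escaping at all. A quick sketch:
select $$hjhjjjhjh'mnmnmnm'mn$$ as m;
-- same value as the escaped form:
select 'hjhjjjhjh''mnmnmnm''mn' as m;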

Postgres COPY to CSV

I'm trying to export a query to a csv file using the COPY command in pgAdminIII.
I'm using this command:
COPY (SELECT CASE WHEN cast(click as int) = 1 THEN 1
ELSE -1
END
|| ' '
|| id
|| '|'
|| 'hr_' || substring(hour, 7,8)
--|| ' dw_x' + substring(datename(dw, substring(hour,1,6) ),1,2)
|| ' |dw_' || substring(to_char(to_date(substring(hour, 1,6),'YYMMDD'), 'dy'),1,2)
|| ' |C1_' || c1
|| ' |C21_' || c21
|| ' |C22_' || substring(to_char(to_date(substring(hour, 1,6),'YYMMDD'), 'dy'),1,2) || '_' || substring(hour, 7,8)
AS COL1
FROM clickthru.train limit 10)
TO 'C:\me\train.csv' with csv;
When I run it I get:
ERROR: could not open file "C:\me\train.csv" for writing: Permission denied
SQL state: 42501
I then tried using the following in psql:
grant all privileges on train to public
and then looked at the access privileges using \z, but I am still getting the same error. I'm using PostgreSQL 9.4 on a Windows 7 box. Any other suggestions?
copy writes the file under the user account of the Postgres server process (it is a server-side operation).
From the manual:
The file must be accessible by the PostgreSQL user (the user ID the server runs as) and the name must be specified from the viewpoint of the server
Under Windows this is the "Network Service" account by default.
Apparently that account does not have write privileges on that directory.
You have two possible ways of solving that:
change the privileges on the directory to allow everybody full access
use the client side \copy command (in psql) instead
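A minimal sketch of the second option (the SELECT is shortened here; substitute the full query from the question). Note that psql requires the whole \copy command on a single line, and forward slashes in the path avoid any escaping concerns on Windows:
\copy (SELECT id, click FROM clickthru.train LIMIT 10) TO 'C:/me/train.csv' WITH CSV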