Parsing Apache logs using PostgreSQL

This O'Reilly article gives an example of a PostgreSQL statement that parses an Apache log line:
INSERT INTO http_log(log_date,ip_addr,record)
SELECT CAST(substr(record,strpos(record,'[')+1,20) AS date),
CAST(substr(record,0,strpos(record,' ')) AS cidr),
record
FROM tmp_apache;
Obviously this only extracts the IP and timestamp fields. Is there a canonical statement for extracting all fields from a typical combined log format record? If there isn't, I will write one and I promise to post the result here!

OK, here is my solution:
insert into accesslog
select m[1], m[2], m[3],
(to_char(to_timestamp(m[4], 'DD/Mon/YYYY:HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS ')
|| split_part(m[4], ' ',2))::timestamp with time zone,
m[5], m[6]::smallint, (case m[7] when '-' then '0' else m[7] end)::integer, m[8], m[9] from (
select regexp_matches(record,
E'(.*) (.*) (.*) \\[(.*)\\] "(.*)" (\\d+) (.*) "(.*)" "(.*)"')
as m from tmp_apache) s;
It takes raw log lines from the table tmp_apache and extracts the fields (using the regexp) into an array.
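For quick experimentation outside the database, the same regexp can be exercised in Python; the sample combined-format log line below is illustrative, not from the article:

```python
import re

# Same capture groups as the regexp_matches() pattern above:
# ip, ident, user, timestamp, request, status, bytes, referer, agent.
LOG_RE = re.compile(r'(.*) (.*) (.*) \[(.*)\] "(.*)" (\d+) (.*) "(.*)" "(.*)"')

# Hypothetical combined-format log line for testing.
line = ('127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] '
        '"GET /index.html HTTP/1.0" 200 2326 '
        '"http://example.com/start" "Mozilla/4.08"')

m = LOG_RE.match(line)
clientip, ident, user, ts, request, status, size, referer, agent = m.groups()
print(clientip, status, request)
```

All nine groups come back as strings, mirroring the text array that regexp_matches() returns before the casts are applied.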

Here is my somewhat more complete solution.
The Apache log file should not contain invalid characters or backslashes. If necessary, you can remove these from the log file with:
cat logfile | strings | grep -v '\\' > cleanedlogfile
Then copy and parse the log file into Postgres (m[1] to m[7] correspond to the capture groups of the regexp_matches function):
-- sql for postgres:
drop table if exists rawlog;
create table rawlog (record varchar);
-- import data from log file
copy rawlog from '/path/to/your/apache/cleaned/log/file';
-- parse the rawlog into table accesslog
drop table if exists accesslog;
create table accesslog as
(select m[1] as clientip,
(to_char(to_timestamp(m[4], 'DD/Mon/YYYY:HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS ')
|| split_part(m[4], ' ',2))::timestamp with time zone as "time",
split_part(m[5], ' ', 1) as method,
split_part(split_part(m[5], ' ', 2), '?', 1) as uri,
split_part(split_part(m[5], ' ', 2), '?', 2) as query,
m[6]::smallint as status,
m[7]::bigint bytes
from
(select
regexp_matches(record, E'(.*) (.*) (.*) \\[(.*)\\] "(.*)" (\\d+) (\\d+)') as m
from rawlog) s);
-- optionally create indexes
create index accesslogclientipidx on accesslog(clientip);
create index accesslogtimeidx on accesslog(time);
create index accessloguriidx on accesslog(uri);

Related

Hive Concat function not working in Beeline

${beeline_url} --silent=true --showHeader=false --outputformat=csv2 --showWarnings=false -e "select concat('invalidate metadata ', trim(table_name) , '; refresh ', trim(table_name) ,';') from my_Table " > /home/table_list.csv
Running this query ends up with an error, although the same statement runs fine in Hive and Hue.
Even in interactive beeline, the query below gave results:
0: jdbc:hive2://host> select concat("invalidate metadata ", trim(table_name)) from my_Table;
I also tried storing the query in a file, but that too ends up in an error:
${beeline_url} --silent=true --showHeader=false --outputformat=csv2 --verbose=false --showWarnings=false -f get_table_list.hql > /home/table_list.csv
where get_table_list.hql has
SELECT (CONCAT('invalidate metadata ', trim(table_name) , '; refresh ', trim(table_name) ,';')) from my_table;
Error:
Error: Error while compiling statement: FAILED: ParseException line
1:59 cannot recognize input near '' '' '' in select
expression (state=42000,code=40000)
Semicolons need to be escaped with a backslash:
SELECT (CONCAT('invalidate metadata ', trim(table_name) , '\; refresh ', trim(table_name) ,'\;')) from my_table;
Or replace them with \073 (the octal escape for the ; character):
SELECT (CONCAT('invalidate metadata ', trim(table_name) , '\073 refresh ', trim(table_name) ,'\073')) from my_table;
One of these workarounds should work.
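If maintaining the escapes gets awkward, the statement list can also be built outside of beeline entirely, so beeline never has to parse embedded semicolons. A minimal Python sketch (the table names are made up):

```python
# Build "invalidate metadata X; refresh X;" lines from a list of table
# names, mimicking what the CONCAT query produces.
tables = ["db.orders ", " db.customers"]  # stand-ins for rows of my_table

statements = [f"invalidate metadata {t.strip()}; refresh {t.strip()};"
              for t in tables]
output = "\n".join(statements)
print(output)
```

The generated lines can then be written to a file and executed as a script.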

How to Insert data using cursor into new table having single column containing XML type data in Oracle?

I'm able to insert values into table 2 from table 1 and execute the PL/SQL procedure successfully, but somehow the output is clunky and I don't know why.
Below is the code :
create table airports_2_xml
(
airport xmltype
);
declare
cursor insert_xml_cr is select * from airports_1_orcl;
begin
for i in insert_xml_cr
loop
insert into airports_2_xml values
(
xmlelement("OneAirport",
xmlelement("Rank", i.Rank) ||
xmlelement("airport",i.airport) ||
xmlelement("Location",i.Location) ||
xmlelement("Country", i.Country) ||
xmlelement("Code_iata",i.code_iata) ||
xmlelement("Code_icao", i.code_icao) ||
xmlelement("Total_Passenger",i.Total_Passenger) ||
xmlelement("Rank_change", i.Rank_change) ||
xmlelement("Percent_Change", i.Percent_change)
));
end loop;
end;
/
select * from airports_2_xml;
Output:
Why is it showing &lt; and &gt; in the output? And why am I unable to see the output in full?
Expected output:
<OneAirport>
<Rank>3</Rank>
<Airport>Dubai International</Airport>
<Location>Garhoud</Location>
<Country>United Arab Emirates</Country>
<Code_IATA>DXB</Code_IATA>
<Code_ICAO>OMDB</Code_ICAO>
<Total_passenger>88242099</Total_passenger>
<Rank_change>0</Rank_change>
<Percent_Change>5.5</Percent_Change>
</OneAirport>
The main issue is how you are constructing the XML. You have an outer XMLElement for OneAirport, and the content of that element is a single string.
You are generating individual XMLElements from the cursor fields, but then you are concatenating those together, which gives you a single string that still contains the angle brackets. So you're trying to do something like this, simplified a bit:
select
xmlelement("OneAirport", '<Rank>1</Rank><airport>Hartsfield-Jackson</airport>')
from dual;
XMLELEMENT("ONEAIRPORT",'<RANK>1</RANK><AIRPORT>HARTSFIELD-JACKSON</AIRPORT>')
--------------------------------------------------------------------------------
<OneAirport><Rank>1</Rank><airport>Hartsfield-Jackson</airp
and by default XMLElement() escapes entities in the passed-in values, so the angle brackets are being converted to 'safe' equivalents like &lt;. If it didn't do that, or you told it not to with noentityescaping:
select xmlelement(noentityescaping "OneAirport", '<Rank>1</Rank><airport>Hartsfield-Jackson</airport>')
from dual;
XMLELEMENT(NOENTITYESCAPING"ONEAIRPORT",'<RANK>1</RANK><AIRPORT>HARTSFIELD-JACKS
--------------------------------------------------------------------------------
<OneAirport><Rank>1</Rank><airport>Hartsfield-Jackson</airport></OneAirport>
then that would appear to be better, but you still actually have a single element containing a single string (with characters that are likely to cause problems down the line), rather than the XML structure you almost certainly intended.
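This escaping behaviour is not Oracle-specific; any well-behaved XML API escapes markup characters that appear inside character data. A small Python illustration:

```python
from xml.sax.saxutils import escape

# A string that *looks* like XML is still just character data; embedding
# it in an element means its angle brackets get entity-escaped.
payload = '<Rank>1</Rank><airport>Hartsfield-Jackson</airport>'
escaped = escape(payload)
print(escaped)
```

The markup must be built as elements, not pasted in as text, which is exactly what XMLForest() does below.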
A simple way to get an actual structure is with XMLForest():
xmlelement("OneAirport",
xmlforest(i.Rank, i.airport, i.Location, i.Country, i.code_iata,
i.code_icao, i.Total_Passenger, i.Rank_change, i.Percent_change)
)
You don't need the cursor loop, or any PL/SQL; you can just do:
insert into airports_2_xml (airport)
select xmlelement("OneAirport",
xmlforest(i.Rank, i.airport, i.Location, i.Country, i.code_iata,
i.code_icao, i.Total_Passenger, i.Rank_change, i.Percent_change)
)
from airports_1_orcl i;
The secondary issue is the display. You'll see more data if you issue some formatting commands, such as:
set lines 120
set long 32767
set longchunk 32767
Those will tell your client to retrieve and show more of the long (here, XMLType) data, rather than the default 80 characters it's giving you now.
Once you are generating a nested XML structure, you can use XMLSerialize() to display it more readably when you query your second table.
Try the block below:
declare
cursor insert_xml_cr is select * from airports_1_orcl;
v_airport_xml SYS.XMLTYPE;
begin
for i in insert_xml_cr
loop
SELECT XMLELEMENT ( "OneAirport",
XMLFOREST(i.Rank as "Rank"
,i.airport as "Airport"
,i.Location as "Location"
,i.Country as "Country"
,i.code_iata as "Code_iata"
,i.code_icao as "code_icao"
,i.Total_Passenger as "Total_Passenger"
, i.Rank_change as "Rank_change"
,i.Percent_change as "Percent_Change"
))
into v_airport_xml
FROM DUAL;
insert into airports_2_xml values (v_airport_xml);
end loop;
end;
/

import array type into hana?

I am importing data into SAP HANA using CSV files.
When I try to import a column that has an array type, it results in the following error:
ARRAY type is not compatible with PARAMETER TYPE
For example
CREATE COLUMN TABLE "SCHEMA"."TABLE"
( "ID" INT,
"SUBJECTS" INT ARRAY)
The above query creates the table and when I run
INSERT INTO "SCHEMA"."TABLE" VALUES (1,ARRAY(1,2,3))
It inserts successfully into the HANA database.
But when I try
INSERT INTO "SCHEMA"."TABLE" VALUES (1,"{1,2,3}")
It does not work. So how can I import the array values in the CSV file into the array column in the HANA database?
Array storage types can currently only be created by using the ARRAY() function.
You could construct the INSERT statements in a loop, but you would still need to construct the ARRAY() call for every record.
Ok, here's the example you asked for.
By now you should understand that there is no simple IMPORT command that would automatically insert arrays into a HANA table.
That leaves you with two options as I see it:
1. You write a loader program that reads your CSV file, parses the array data {..., ..., ...}, and makes INSERT statements with ARRAY() functions out of it.
2. You load the data in two steps:
2.1 Load the data from the CSV as-is and put the array data into a CLOB column.
2.2 Add the array columns to the table and run a loop that takes the CLOB data, replaces the curly brackets with normal brackets, and creates a dynamic SQL statement.
Like so:
create column table array_imp_demo as (
select owner_name , object_type, to_clob( '{'|| string_agg ( object_oid, ', ') || '}' )array_text
from ownership
group by owner_name, object_type);
select top 10 * from array_imp_demo;
/*
OWNER_NAME OBJECT_TYPE ARRAY_TEXT
SYS TABLE {142540, 142631, 142262, 133562, 185300, 142388, 133718, 142872, 133267, 133913, 134330, 143063, 133386, 134042, 142097, 142556, 142688, 142284, 133577, 185357, 142409, 133745, 142902, 133308, 133948, 134359, 143099, 133411, 134135, 142118, 142578, 142762, 142306, 133604, 142429, 133764, 142928, 133322, 133966, 134383, 143120, 133443, 134193, 142151, 142587, 142780, 142327, 133642, 142448, 133805, 142967, 185407, 133334, 133988, 134624, 143134, 133455, 134205, 142173, 142510, 142606, 142798, 142236, 133523, 142359, 133663, 142465, 133825, 142832, 133175, 133866, 134269, 143005, 133354, 134012, 134646, 143148, 133497, 134231, 142195, 142526, 142628, 142816, 142259, 133551, 142382, 133700, 142493, 133855, 142862, 133235, 133904, 134309, 143051, 133373, 134029, 142082, 143306, 133513, 134255, 142216, 142553, 142684, 142281, 133572, 185330, 142394, 133738, 142892, 133299, 133929, 134351, 143080, 133406, 134117, 142115, 142576, 142711, 142303, 133596, 142414, 133756, 142922, 185399, 133319, 133958, 134368, 143115,
SYSTEM TABLE {154821, 146065, 146057, 173960, 174000, 173983, 144132, 173970}
_SYS_STATISTICS TABLE {151968, 146993, 147245, 152026, 147057, 147023, 151822, 147175, 147206, 151275, 151903, 147198, 147241, 151326, 151798, 152010, 147016, 147039, 147068, 151804, 151810, 147002, 147202, 151264, 151850, 147186, 147114, 147228, 151300, 151792, 151992, 147030, 147062, 151840, 147181, 147210, 151289, 151754, 149419, 150274, 147791, 150574, 147992, 149721, 150672, 148132, 149894, 147042, 148434, 149059, 150071, 147518, 148687, 149271, 150221, 150877, 147716, 148913, 150388, 147957, 149675, 150634, 148102, 149829, 148267, 148979, 149976, 147494, 148663, 151107, 149224, 150178, 147667, 148855, 149532, 150306, 147925, 149602, 150598, 148080, 149792, 148179, 149926, 147417, 148547, 149186, 150107, 147568, 148804}
_SYS_EPM TABLE {143391, 143427, 143354, 143385, 143411, 143376, 143398, 143367, 143414, 143344, 175755}
_SYS_REPO TABLE {145086, 151368, 171680, 145095, 152081, 169183, 151443, 149941, 154366, 143985, 145116, 152104, 151496, 169029, 177728, 179047, 145065, 169760, 178767, 151659, 169112, 169258, 153736, 177757, 174923, 145074, 150726, 151697, 169133, 178956, 145083, 169171, 168992, 145092, 177665, 169178, 151378, 169271, 178881, 174911, 154128, 143980, 145101, 152098, 151481, 177720, 152504, 145062, 151570, 169102, 154058, 145071, 170733, 151687, 169130, 145080, 171629, 169166, 178846, 145089, 149588, 151373, 177614, 143976, 145098, 152087, 151458, 149955, 178907, 154386, 145059, 169605, 151529, 169035, 178579, 151176, 179178, 145068, 150709, 151670, 169124, 174905, 177778, 154244, 145077, 170883, 169158, 144072, 152681, 144285, 154415, 144620, 145268, 168884, 144895, 143512, 151428, 168774, 143750, 152337, 168558, 144114, 149559, 152719, 144327, 144674, 145508, 168924, 144939, 143578, 152135, 143793, 152392, 168587, 144151, 152753, 144370, 144720, 145722, 168960, 144990, 143626, 152174, 143832, 152435, 168620, 144188, 152785,
_SYS_TASK TABLE {146851, 146847, 146856, 146231, 146143, 146839, 146854, 146834, 146430, 146464, 146167, 146505, 146205, 146257, 146384, 146313, 146420, 146356, 146454, 146155, 146495, 146193, 146525, 146244, 146368, 146296, 146410, 146346, 146443, 146148, 146484, 146181, 146517, 146234, 146275, 146395, 146325}
_SYS_AFL TABLE {177209, 176741, 176522, 177243, 176777, 176692, 176201, 177294, 176929, 177328, 176967, 177383, 177105, 177015, 176826, 177143, 177056, 176866, 177215, 176748, 176550, 176474, 177249, 176784, 176699, 176218, 146871, 177300, 176935, 177335, 176973, 177389, 177111, 177021, 176835, 177156, 177062, 176883, 185427, 177221, 176755, 176572, 176487, 177261, 176790, 176706, 176398, 177306, 176941, 177341, 176979, 177397, 177119, 177033, 176843, 177162, 177069, 176889, 177228, 176762, 176589, 177267, 176799, 176713, 176416, 177313, 176953, 177348, 176991, 177404, 177127, 177040, 176849, 177173, 177075, 176895, 177195, 176732, 176507, 177234, 176768, 177274, 176805, 176720, 176448, 177285, 176921, 177319, 176959, 177354, 176997, 177375, 177095, 177007, 176818, 177411, 177133, 177047, 176858, 177179, 177081, 176911, 177201, 176738, 176519, 177241, 176775, 176690, 176196, 177280, 176814, 176727, 176464, 177291, 176927, 177326, 176965, 177360, 177003, 177381, 177103, 177013, 176824, 177141, 177053, 176864, 177191, 177091,
_SYS_XB TABLE {146971, 146957}
DEVDUDE TABLE {167121, 165374, 182658, 156616, 173111, 181568, 174901, 183413, 184432, 183498, 183470, 184464, 155821, 173102, 183613, 184495, 155857, 166744, 180454, 184547, 156234, 172096, 166765, 165548, 184649, 183399, 184357, 184577, 183477, 181594, 183537, 181572, 167201, 184685, 185467, 183406, 184422, 184610, 183491, 155842, 172923, 157723, 182636, 167895, 183463, 184454, 183505, 165542, 183606, 184488, 155849, 172749, 157626, 184527, 183449, 166759, 184627, 182827, 184347, 184568, 157619, 172118, 183530, 181556, 167137, 184670, 182642, 184411, 184598, 183484, 155835, 183456, 183593, 181584, 167328, 183421, 184443, 183586, 184474, 155828, 166392, 183620, 184517, 183435, 183442, 183512, 166753, 184557, 156598, 172106, 183523, 166771, 166568, 184660, 182630, 184381, 184587, 183428, 157681, 182649, 167264, 168236}
ADMIN TABLE {158520, 158925, 158982, 158492, 158571, 158583, 158560, 158541, 158744}
*/
Ok, let's just assume that this is the data that you managed to load from your CSV file into a table and that the ARRAY_TEXT column is a big CLOB.
Now, the data is in the table, but you need to put it into an ARRAY column.
alter table array_imp_demo add (array_data integer array);
The inconvenient part follows now: creating a new UPDATE command for each record to transform the CLOB data into a proper ARRAY() function call.
Be aware that this update command only works correctly because (OWNER_NAME, OBJECT_TYPE) happens to identify each row uniquely in this case.
do begin
declare vsqlmain nvarchar (50) :='UPDATE array_imp_demo set array_data = ARRAY ';
declare vsql nvarchar(5000);
declare cursor c_upd for
select OWNER_NAME, OBJECT_TYPE,
replace (replace (ARRAY_TEXT, '{', '(' ), '}', ')' ) array_data
from array_imp_demo;
for cur_row as c_upd do
vsql = :vsqlmain || cur_row.array_data;
vsql = :vsql || ' WHERE owner_name = ''' || cur_row.owner_name || '''';
vsql = :vsql || ' AND object_type = ''' || cur_row.object_type || '''';
exec :vsql;
end for;
end;
select * from array_imp_demo;
OWNER_NAME OBJECT_TYPE ARRAY_TEXT ARRAY_DATA
SYS TABLE {142540, 142631, 142262,... 142540, 142631, 142262,...
SYSTEM TABLE {154821, 146065, 146057,... 154821, 146065, 146057,...
_SYS_STATISTICS TABLE {151968, 146993, 147245,... 151968, 146993, 147245,...
_SYS_EPM TABLE {143391, 143427, 143354,... 143391, 143427, 143354,...
_SYS_REPO TABLE {145086, 151368, 171680,... 145086, 151368, 171680,...
After that you can drop the CLOB column.
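For completeness, option 1 (an external loader program) could be sketched like this in Python; the CSV layout and the table name are assumptions for illustration:

```python
import csv, io

# Read CSV rows where the array column is written as "{1, 2, 3}" and emit
# INSERT statements that use the ARRAY() function, one per record.
raw = io.StringIO('1,"{1, 2, 3}"\n2,"{4, 5}"\n')  # stand-in for the CSV file

stmts = []
for row_id, arr_text in csv.reader(raw):
    elems = arr_text.strip().strip("{}")  # "{1, 2, 3}" -> "1, 2, 3"
    stmts.append(f'INSERT INTO "SCHEMA"."TABLE" VALUES ({row_id}, ARRAY({elems}))')

for s in stmts:
    print(s)
```

Each emitted statement can then be executed against HANA through whatever client interface you use.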
Ok, that's about it from my side.
Have you tried using the CTL (import control file) method to import the data? That might help.

Error with single quotes inside text in select statement

Getting this error using PostgreSQL 9.3:
select 'hjhjjjhjh'mnmnmnm'mn'
Error:
ERROR: syntax error at or near "'mn'"
SQL state: 42601
Character: 26
I tried replacing the single quotes inside the text with:
select REGEXP_REPLACE('hjhjjjhjh'mnmnmnm'mn', '\\''+', '''', 'g')
and
select '$$hjhjjjhjh'mnmnmnm'mn$$'
but it did not work.
Below is the real code:
CREATE OR REPLACE FUNCTION generate_mallet_input2() RETURNS VOID AS $$
DECLARE
sch name;
r record;
BEGIN
FOR sch IN
select schema_name from information_schema.schemata where schema_name not in ('test','summary','public','pg_toast','pg_temp_1','pg_toast_temp_1','pg_catalog','information_schema')
LOOP
FOR r IN EXECUTE 'SELECT rp.id as id,g.classified as classif, concat(rp.summary,rp.description,string_agg(c.message, ''. '')) as mess
FROM ' || sch || '.report rp
INNER JOIN ' || sch || '.report_comment rc ON rp.id=rc.report_id
INNER JOIN ' || sch || '.comment c ON rc.comments_generatedid=c.generatedid
INNER JOIN ' || sch || '.gold_set g ON rp.id=g.key
WHERE g.classified = any (values(''BUG''),(''IMPROVEMENT''),(''REFACTORING''))
GROUP BY g.classified,rp.summary,rp.description,rp.id'
LOOP
IF r.classif = 'BUG' THEN
EXECUTE format('Copy( select REPLACE(''%s'', '''', '''''''') as m ) To ''/tmp/csv-temp/BUG/'|| quote_ident(sch) || '-' || r.id::text || '.txt ''',r.mess);
ELSIF r.classif = 'IMPROVEMENT' THEN
EXECUTE format('Copy( select REPLACE(''%s'', '''', '''''''') as m ) To ''/tmp/csv-temp/IMPROVEMENT/'|| quote_ident(sch) || '-' || r.id || '.txt '' ',r.mess);
ELSIF r.classif = 'REFACTORING' THEN
EXECUTE format('Copy( select REPLACE(''%s'', '''', '''''''') as m ) To ''/tmp/csv-temp/REFACTORING/'|| quote_ident(sch) || '-' || r.id || '.txt '' ',r.mess);
END IF;
END LOOP;
END LOOP;
RETURN;
END;
$$ LANGUAGE plpgsql STRICT;
select * FROM generate_mallet_input2();
Error:
ERROR: syntax error at or near "mailto"
LINHA 1: ...e.http.impl.conn.SingleClientConnManager$HTTPCLIENT-803).The new SSLSocketFactory.connectSocket method calls the X509HostnameVerifier with InetSocketAddress.getHostName() parameter. When the selected IP address has a reverse lookup name, the verifier is called with the resolved name, and so the IP check fails.4.0 release checked for original ip/hostname, but this cannot be done with the new connectSocket() method. The TestHostnameVerifier.java only checks 127.0.0.1/.2 and so masked the issue, because the matching certificate has both "localhost" and "127.0.0.1", but actually only "localhost" is matched. A test case with 8.8.8.8 would be better.I committed a slightly better workaround for the problem that does not require reverse DNS lookups.Oleg. I had to resort to a fairly ugly hack in order to fix the problem. A better solution would require changes to the X509HostnameVerifier API. I felt that deprecation of the X509HostnameVerifier interface was not warranted, as the use of an IP address for CN in a certificate was a hack by itself.Please review.Oleg . Even the second one requires the server presenting a trusted certificate. I don't see much difference beetween the two cases.. Wrong test. Try to connect to https://93.62.162.60:8443/. The certificate has CN=93.62.162.60, but the check is done for 93-62-162-60.ip23.fastwebnet.it. Hmm, my comment was not meant to revert the patch. The first scenario was already exploitable and still is. Your patch is the "correct" solution without breaking the API.But to avoid any security issue (including the ones already present) the API have to be changed.. I am not able to reproduce the problem. 
SSL connections to remote peers pass the default host name verification.---executing requestGET https://www.verisign.com/ HTTP/1.1[DEBUG] SingleClientConnManager - Get connection for route HttpRoute[{s}->https://www.verisign.com][DEBUG] DefaultClientConnectionOperator - Connecting to www.verisign.com/69.58.181.89:443[DEBUG] RequestAddCookies - CookieSpec selected: best-match[DEBUG] DefaultHttpClient - Attempt 1 to execute request[DEBUG] DefaultClientConnection - Sending request: GET / HTTP/1.1[DEBUG] headers - >> GET / HTTP/1.1[DEBUG] headers - >> Host: www.verisign.com[DEBUG] headers - >> Connection: Keep-Alive[DEBUG] headers - >> User-Agent: Apache-HttpClient/4.1 (java 1.5)[DEBUG] DefaultClientConnection - Receiving response: HTTP/1.1 200 OK[DEBUG] headers - << HTTP/1.1 200 OK[DEBUG] headers - << Date: Thu, 03 Feb 2011 20:14:35 GMT[DEBUG] headers - << Server: Apache[DEBUG] headers - << Set-Cookie: v1st=D732270AE4FC9F76; path=/; expires=Wed, 19 Feb 2020 14:28:00 GMT; domain=.verisign.com[DEBUG] headers - << Set-Cookie: v1st=D732270AE4FC9F76; path=/; expires=Wed, 19 Feb 2020 14:28:00 GMT; domain=.verisign.com[DEBUG] headers - << X-Powered-By: PHP/5.2.13[DEBUG] headers - << Keep-Alive: timeout=5, max=100[DEBUG] headers - << Connection: Keep-Alive[DEBUG] headers - << Transfer-Encoding: chunked[DEBUG] headers - << Content-Type: text/html[DEBUG] ResponseProcessCookies - Cookie accepted: "[version: 0][name: v1st][value: D732270AE4FC9F76][domain: .verisign.com][path: /][expiry: Wed Feb 19 15:28:00 GMT+01:00 2020]". [DEBUG] ResponseProcessCookies - Cookie accepted: "[version: 0][name: v1st][value: D732270AE4FC9F76][domain: .verisign.com][path: /][expiry: Wed Feb 19 15:28:00 GMT+01:00 2020]". 
[DEBUG] DefaultHttpClient - Connection can be kept alive for 5000 MILLISECONDS----------------------------------------HTTP/1.1 200 OKResponse content length: -1[DEBUG] SingleClientConnManager - Releasing connection org.apache.http.impl.conn.SingleClientConnManager$ConnAdapter#15ad5c6[DEBUG] DefaultClientConnection - Connection shut down---Are you using a custom SSL socket factory by any chance? Does it implement the LayeredSchemeSocketFactory interface?Oleg. Great work, good patch, thanks!. Well, I looked at the patch. It should fix the issue (though not completely, since the reverse lookup could give a wrong/unresolvable hostname), but as you said it's a crude hack, and this opens to other security issues. Unfortunately the clean fix requires API modification.You say using an IP address as CN is a hack, but actually using it as an ipAddress SubjectAlternativeName is perfectly valid.The security issues arise from the fact that httpclient tries to match dns generated data (reverse lookups and now also resolved hostnames) instead of what the user actually typed, opening to DNS poisoning or connection redirect attacks.First scenario:- user wants to connect to 1.2.3.4- DNS reverse lookup is xxx.yyy.zzz- a malicious proxy redirects the connection to server 4.3.2.1- server certificate contains CN or SAN set to xxx.yyy.zzz- All OK (but shouldn't)Second scenario:- user wants to connect to xxx.yyy.zzz- hacked DNS incorrectly resolve it to 1.2.3.4- server certificate has CN or SAN set to 1.2.3.4- The connection is established OK (but clearly shouldn't). Fair enough. I'll revert the patch and close the issue as WONTFIXOleg. The first scenario you are describing would also require involvement of green men from Mars and the malicious 4.3.2.1 server sending a certificate trusted by the client to be practical. Oleg', '', '''') as m ) To '/tmp/csv-temp/BUG/httpclient-HTTPCLIENT-1051.txt '
CONTEXT: PL/pgSQL function generate_mallet_input2() line 31 at EXECUTE statement
********** Error **********
ERROR: syntax error at or near "mailto"
SQL state: 42601
Context: PL/pgSQL function generate_mallet_input2() line 31 at EXECUTE statement
The retrieved content is a long text about project issues in software repositories and can contain HTML. The HTML quotes are causing the problem.
It is not the content of the string that needs to be escaped, but its representation within the SQL you are sending to the server.
In order to represent a single ', you need to write two in the SQL syntax: ''. So, 'IMSoP''s answer' represents the string IMSoP's answer, '''' represents ', and '''''' represents ''.
But the crucial thing is that you need to do this before trying to run the SQL. You can't paste an invalid SQL command into a query window and tell it to heal itself.
How to automate the escaping therefore depends entirely on how you are creating that SQL. Based on your updated question, we now know that you are creating the SQL statement in pl/pgsql, in this format() call:
format('Copy( select REPLACE(''%s'', '''', '''''''') as m ) To ''/tmp/csv-temp/BUG/'|| quote_ident(sch) || '-' || r.id::text || '.txt ''',r.mess)
Let's simplify that a bit to make the example clearer:
format('select REPLACE(''%s'', '''', '''''''') as m', r.mess)
If r.mess was foo, the result would look like this:
select REPLACE('foo', '', ''''') as m
This replace won't do anything useful, because the first argument is an empty string and the second has 3 ' marks in it; but even if you fixed the number of ' marks, it won't work. If the value of r.mess was instead bad'stuff, you'd get this:
select REPLACE('bad'stuff', '', ''''') as m
That's invalid SQL; no matter where you try to run it, it won't work, because Postgres thinks the 'bad' is a string, and the stuff that comes next is invalid syntax.
Think about how it will look if r.mess is SQL injection'); DROP TABLE users --:
select REPLACE('SQL injection'); DROP TABLE users; --', '', ''''') as m
Now we've got valid SQL, but it's probably not what you wanted!
So what you need to do is escape the ' marks in r.mess before you mix it into the string:
format('select ''%s'' as m', REPLACE(r.mess, '''', ''''''))
Now we're changing bad'stuff to bad''stuff before it goes into the SQL, and ending up with this:
select 'bad''stuff' as m
This is what we wanted.
There's actually a few better ways to do this, though:
Use the %L modifier to the format function, which outputs an escaped and quoted string literal:
format('select %L as m', r.mess)
Use the quote_literal() or quote_nullable() string functions instead of replace(), and concatenate the string together as you do with the filename:
'select ' || quote_literal(r.mess) || ' as m'
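The doubling rule is mechanical, so it is easy to mirror client-side. A hypothetical Python helper in the spirit of quote_literal() (note that the real Postgres function also switches to the E'' syntax for strings containing backslashes, which this sketch ignores):

```python
def quote_literal(s: str) -> str:
    # Double every embedded single quote, then wrap the result in quotes.
    return "'" + s.replace("'", "''") + "'"

print(quote_literal("bad'stuff"))
```

So quote_literal("bad'stuff") yields the literal 'bad''stuff', which is safe to splice into a statement.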
Finally, if the function really looks like it does in your question, you can avoid the whole problem by not using a loop at all; just copy each set of rows into a file using an appropriate WHERE clause:
EXECUTE 'Copy (
SELECT concat(rp.summary,rp.description,string_agg(c.message, ''. '')) as mess
FROM ' || sch || '.report rp
INNER JOIN ' || sch || '.report_comment rc ON rp.id=rc.report_id
INNER JOIN ' || sch || '.comment c ON rc.comments_generatedid=c.generatedid
INNER JOIN ' || sch || '.gold_set g ON rp.id=g.key
WHERE g.classified = ''BUG'' -- <-- Note changed WHERE clause
GROUP BY g.classified,rp.summary,rp.description,rp.id
) To ''/tmp/csv-temp/BUG/' || quote_ident(sch) || '.txt''';
Repeat for IMPROVEMENT and REFACTORING. I can't be sure, but in general, acting on a set of rows at once is more efficient than looping over them. Here you'll have to run three queries, but the = any() in your original version is probably fairly inefficient anyway.
I'm taking a stab at this now that I think I know what you are asking.
You have a field in a table that, when you run SELECT <field> FROM <table>, returns the result:
'This'is'a'test'
You want, instead, this result to look like:
'This''is''a''test'
So:
CREATE Table test( testfield varchar(30));
INSERT INTO test VALUES ('''This''is''a''test''');
You can run:
SELECT
'''' || REPLACE(Substring(testfield FROM 2 FOR LENGTH(testfield) - 2),'''', '''''') || ''''
FROM Test;
This will get only the bits inside the first and last single quote, then replace each inner single quote with a doubled single quote. Finally it concatenates single quotes back onto the beginning and end.
SQL Fiddle:
http://sqlfiddle.com/#!15/a99e6/4
If it's not doubled single quotes that you want in the interior of your string result, you can change the REPLACE() function to the appropriate character(s). Also, if it's not single quotes you want to encapsulate the string, you can change those in the concatenation.
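The same strip-double-rewrap transformation, sketched in Python for comparison:

```python
# Drop the outer quotes, double the inner ones, then re-wrap.
s = "'This'is'a'test'"
result = "'" + s[1:-1].replace("'", "''") + "'"
print(result)
```

This produces 'This''is''a''test', matching the expected output above.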
You need to escape your quoted ' marks:
select 'hjhjjjhjh''mnmnmnm''mn'

concat apostrophe to oracle sql query

Hi all, I am looking for some pointers on how I can add an apostrophe to my query results in the first column.
My current query:
set verify off
set colsep "',"
set pagesize 2000
ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-YY-HH24:MI:SS';
spool /home/user/out.txt
select '[' || table1.collectdatetime as "['Date-Time",table1.cycletime as "'Time'" from table1 where interfacename='somename' and collectdatetime > (CURRENT_DATE - 1)
order by collectdatetime ASC;
Which results:
['Date-Time ','InCycleTime'
-------------------',-------------
[02-MAR-13-17:56:16', 29
What I am struggling with is getting the results to return with an apostrophe added after the [:
['Date-Time ','InCycleTime'
-------------------',-------------
['02-MAR-13-17:56:16', 29
This is for an Oracle 11.1.0.7 build. The data is being queried and parsed, but I need to get the apostrophe issue worked out.
Use this (the doubled quote '' inside the literal produces the single apostrophe after the bracket):
select '[''' || table1.collectdatetime as "['Date-Time",table1.cycletime as "'Time'" from table1 where interfacename='somename' and collectdatetime > (CURRENT_DATE - 1)
order by collectdatetime ASC;