In PDFBox I would like to group the Test1 and Test2 texts together, so that if the text is selected and copied in the generated PDF, it comes out in the right order: Test1 Test2 Test3 (not Test1 Test3 Test2).
content.setLeading(15);
content.beginText();
content.newLineAtOffset(100, 100);
content.showText("Test1");
content.newLine();
content.showText("Test2");
content.endText();
content.beginText();
content.newLineAtOffset(300, 100);
content.showText("Test3");
content.endText();
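For reference, here is a self-contained sketch of the snippet above (an assumption: PDFBox 2.x; the setFont call, the page setup, and the grouped-text.pdf file name are additions needed to make it runnable). Note that whether a viewer copies the text in this order still depends on its extraction heuristics, since many viewers order extracted text by position rather than by content-stream order.
import java.io.IOException;

import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import org.apache.pdfbox.pdmodel.PDPageContentStream;
import org.apache.pdfbox.pdmodel.font.PDType1Font;

public class GroupedTextExample {
    public static void main(String[] args) throws IOException {
        try (PDDocument doc = new PDDocument()) {
            PDPage page = new PDPage();
            doc.addPage(page);
            try (PDPageContentStream content = new PDPageContentStream(doc, page)) {
                content.setFont(PDType1Font.HELVETICA, 12);
                content.setLeading(15);
                // First text object: Test1 and Test2 written back to back,
                // so they sit next to each other in the content stream.
                content.beginText();
                content.newLineAtOffset(100, 100);
                content.showText("Test1");
                content.newLine();
                content.showText("Test2");
                content.endText();
                // Second text object: Test3, emitted after the first group.
                content.beginText();
                content.newLineAtOffset(300, 100);
                content.showText("Test3");
                content.endText();
            }
            doc.save("grouped-text.pdf");
        }
    }
}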
I want to display images that are stored under Shared Components -> Static Application Files. In my table I added a column image with this content:
image
______________________
#APP_FILES#image01.jpg
#APP_FILES#image02.jpg
...
I select the data in my SQL query for a classic report:
select ID,
'<img src="'||image||'"height="50" width="200">' as image
from TEST_TABLE
And I disabled "Escape special characters" for the image column.
The image does not appear in the report.
How can I display images stored under Shared Components in a classic report column?
Here is an example that shows an image (the same image) for every employee in the emp table. I used an HTML Expression.
The select:
select ename, '#APP_FILES#mickey550x560.jpg' as image_file from emp
set column "image_file" to "Plain Text" with column formatting > HTML Expression:
<img src="#IMAGE_FILE#" "height="55" width="56" alt="image">
Result: the report shows the image for every employee.
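Applied to the TEST_TABLE from the question, the same pattern would look roughly like this (a sketch; it assumes the image column already holds the #APP_FILES#... paths and that the column is rendered as Plain Text rather than built from a concatenated img tag):
select ID,
       image as image_file
  from TEST_TABLE
with Column Formatting > HTML Expression on the image_file column:
<img src="#IMAGE_FILE#" height="50" width="200" alt="image">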
In Vertica we have an attribute column whose values can contain commas or be enclosed in quotation marks (both double and single quotes occur). When we run an S3 export query on Vertica we get the CSV file, but when we validate it through an online CSV validator or a formatted S3 Select query we get an error.
SELECT S3EXPORT(* USING PARAMETERS url='xxxxxxxxxxxxxxxxxxxx.csv', delimiter=',', enclosed_by='\"', prepend_hash=false, header=true, chunksize='10485760'....
Any suggestions on how to resolve this issue?
PS: Manually reading every row and checking columns is not an option.
Example attributes:
select uid, cid, att1 from table_name where uid in (16, 17, 15);
 uid |  cid  | att1
-----+-------+-----------------------
  16 | 78940 | yel,k
  17 | 78940 | master#$;#
  15 | 78940 | "hello , how are you"
S3EXPORT() is deprecated as of Version 11. We are at Version 12 currently.
Now, you would export like so:
EXPORT TO DELIMITED(
directory='s3://mybucket/mydir'
, filename='indata'
, addHeader='true'
, delimiter=','
, enclosedBy='"'
) OVER(PARTITION BEST) AS
SELECT * FROM indata;
With your three rows, this would generate the output below:
dbadmin@gessnerm-HP-ZBook-15-G3:~$ cat /tmp/export/indata.csv
uid,cid,att1
15,78940,"\"hello \, how are you\""
16,78940,"yel\,k"
17,78940,"master#$;#"
Do you need a different format?
Then, try this: ...
EXPORT TO DELIMITED(
directory='/tmp/csv'
, filename='indata'
, addHeader='true'
, delimiter=','
, enclosedBy=''
) OVER(PARTITION BEST) AS
SELECT
uid
, cid
, QUOTE_IDENT(att1) AS att1
FROM indata;
... to get this:
dbadmin@gessnerm-HP-ZBook-15-G3:~$ cat /tmp/csv/indata.csv
uid,cid,att1
15,78940,"""hello \, how are you"""
16,78940,"yel\,k"
17,78940,"master#$;#"
I'm the manager for a project from Vietnam that involves search in SQL. Something like this:
select N'Trên bootrap , click nút setting ko thấy phản ứng' as title into #test
select * from #test where title like N'%Trên%'
select * from #test where title like N'%ứng%'
-------------------
select * from #test where title like N'%ứng%'
But today my customer gave me some characters like the ones above.
From the SQL selects above you can see:
Trên = Trên => it is OK. But
ứng <> ứng (because the user typed it on another computer). I don't know how to solve this situation.
You can try this
SELECT * FROM #test WHERE title LIKE "%ứng%" COLLATE utf8mb4_german2_ci
utf8mb4_unicode_ci would probably work as well.
Today I found a problem with schema.ini; here is my example:
Query:
SELECT *
FROM OpenDataSource('Microsoft.ACE.OLEDB.12.0','Data Source="C:\Temp\";User ID=;Password=;Extended properties="Text;HDR=Yes;FMT=Delimited()"')...[ve01#csv]
ve01.csv file content:
Record No.|Sales Target Link
00000000|00000000
00000001|00000000
00000002|00000003
00000003|00000007
00000004|00000008
00000005|00000000
schema.ini file:
---------------------------
[VE01.csv]
ColNameHeader=True
Format=Delimited(|)
TextDelimiter=
Col1=Record_No Text
Col2=Sales_Target_Link Text
---------------------------
The query will return data correctly separated by (|) only if I add a blank line at the top of schema.ini, like below:
---------------------------

[VE01.csv]
ColNameHeader=True
Format=Delimited(|)
TextDelimiter=
Col1=Record_No Text
Col2=Sales_Target_Link Text
---------------------------
Can someone please help?
Thanks
I have a few questions related to batch insert in Spring.
When I do something like this:
public void save(Car car) {
String sql1 = "insert into Car values (1, 'toyota')";
String sql2 = "insert into Car values (2, 'chrysler')";
String sql3 = "insert into Car values (3, 'infinity')";
String[] tab = new String[3];
tab[0] = sql1;
tab[1] = sql2;
tab[2] = sql3;
getJdbcTemplate().update(sql1);
getJdbcTemplate().update(sql2);
getJdbcTemplate().update(sql3);
// getJdbcTemplate().batchUpdate(tab);
}
in the MySQL log file I see:
1 Query insert into Car values (1, 'toyota')
2 Query insert into Car values (2, 'chrysler')
3 Query insert into Car values (3, 'infinity')
so we have 3 insert statements (and 3 network calls).
When I use getJdbcTemplate().batchUpdate(tab) in the log file I can see:
1094 [main] DEBUG org.springframework.jdbc.core.JdbcTemplate - Executing SQL batch update of 3 statements
1110 [main] DEBUG org.springframework.jdbc.datasource.DataSourceUtils - Fetching JDBC Connection from DataSource
1110 [main] DEBUG org.springframework.jdbc.datasource.DriverManagerDataSource - Creating new JDBC DriverManager Connection to [jdbc:mysql://localhost:3306/test?useServerPrepStmts=true]
1610 [main] DEBUG org.springframework.jdbc.support.JdbcUtils - JDBC driver supports batch updates
and in the MySQL log:
1 Query insert into Car values (1, 'toyota')
1 Query insert into Car values (2, 'chrysler')
1 Query insert into Car values (3, 'infinity')
I understand that in the background the addBatch method is invoked on the statement object and all these operations are sent together in one batch. An additional benefit is the reduction of network calls. Is my reasoning correct?
I'm looking for something similar in HibernateTemplate. I can do it in this way:
getHibernateTemplate().saveOrUpdateAll(Arrays.asList(new Car(4, "infinity"), new Car(5, "ford")));
In that case, in the log file I can see:
3 Prepare select car_.id, car_.name as name0_ from Car car_ where car_.id=?
3 Prepare select car_.id, car_.name as name0_ from Car car_ where car_.id=?
3 Prepare insert into Car (name, id) values (?, ?)
So it seems that everything is done in one shot, as it was for getJdbcTemplate().batchUpdate(...)
Please correct me if I'm wrong.
To produce a result similar to Hibernate (with a prepared statement), you should use JdbcTemplate.batchUpdate(String, BatchPreparedStatementSetter). Something like this:
final List<Car> cars = Arrays.asList(...);
getJdbcTemplate().batchUpdate("insert into Car (name, id) values (?, ?)",
        new BatchPreparedStatementSetter() {
            public int getBatchSize() {
                return cars.size();
            }
            // Spring passes the index of the current batch entry as the second argument.
            public void setValues(PreparedStatement ps, int i) throws SQLException {
                ps.setString(1, cars.get(i).getName());
                ps.setInt(2, cars.get(i).getId());
            }
        });
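One note on the network-call part of the question: with MySQL Connector/J, the statements queued by a JDBC batch are still sent to the server one by one unless batch rewriting is enabled on the connection, which matches the three separate Query lines in the MySQL log above. A minimal sketch of a DataSource with that property added (an assumption that it fits your setup; the user name and password are placeholders):
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

public class BatchJdbcConfig {
    // rewriteBatchedStatements=true lets the driver collapse a batch of inserts
    // into a multi-row INSERT, reducing the number of network round trips.
    public static JdbcTemplate createJdbcTemplate() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("com.mysql.jdbc.Driver");
        dataSource.setUrl("jdbc:mysql://localhost:3306/test?rewriteBatchedStatements=true");
        dataSource.setUsername("user");     // placeholder
        dataSource.setPassword("secret");   // placeholder
        return new JdbcTemplate(dataSource);
    }
}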