I'm using p6spy to log the sql statements generated by my program. The format for the outputted spy.log file looks like this:
current time|execution time|category|statement SQL String|effective SQL string
I'm just wondering if anyone knows if there's a way to alter the spy.properties file and have only the last column, the effective SQL string, output to the spy.log file? I've looked through the properties file but haven't found anything that seems to support this.
Thanks!
In spy.properties there is a property called logMessageFormat that you can set to a custom implementation of MessageFormattingStrategy. This works for any type of logger (e.g. file, slf4j, etc.).
E.g.
logMessageFormat=my.custom.PrettySqlFormat
An example using Hibernate's pretty-printing SQL formatter:
package my.custom;
import org.hibernate.jdbc.util.BasicFormatterImpl;
import org.hibernate.jdbc.util.Formatter;
import com.p6spy.engine.spy.appender.MessageFormattingStrategy;
public class PrettySqlFormat implements MessageFormattingStrategy {
private final Formatter formatter = new BasicFormatterImpl();
@Override
public String formatMessage(int connectionId, String now, long elapsed, String category, String prepared, String sql) {
return formatter.format(sql);
}
}
There is no such option provided yet to achieve this via configuration only. I think you have 2 options here:
file a new bug/feature request (which could bring benefit to others using p6spy as well) at: https://github.com/p6spy/p6spy/issues?state=open or
provide a custom implementation.
For the latter option, I believe you could achieve it via your own class (depending on the logger you use; let's assume you use Log4jLogger).
Well, if you check the relevant part of the Log4jLogger GitHub as well as the SourceForge version, your implementation should be rather straightforward:
spy.properties:
appender=com.EffectiveSQLLog4jLogger
Implementation itself could look like this:
package com;
import com.p6spy.engine.logging.appender.Log4jLogger;
public class EffectiveSQLLog4jLogger extends Log4jLogger {
public void logText(String text) {
super.logText(getEffectiveSQL(text));
}
private String getEffectiveSQL(String text) {
if (null == text) {
return null;
}
final int idx = text.lastIndexOf("|");
// non-perfect detection of the exception logged case
if (-1 == idx) {
return text;
}
return text.substring(idx + 1); // not sure about + 1, but check and see :)
}
}
Please note the implementation should cover GitHub (the new project home, no version released yet) as well as SourceForge (the original project home, which released version 1.3).
Please note: I didn't test the proposal myself, but it could be a good starting point and from the code review itself I'd say it could work.
I agree with @boberj, we are used to having logs with the Hibernate formatter, but don't forget about batching; that's why I suggest using:
import com.p6spy.engine.spy.appender.MessageFormattingStrategy;
import org.hibernate.engine.jdbc.internal.BasicFormatterImpl;
import org.hibernate.engine.jdbc.internal.Formatter;
/**
* Created by Igor Dmitriev on 1/3/16
*/
public class HibernateSqlFormatter implements MessageFormattingStrategy {
private final Formatter formatter = new BasicFormatterImpl();
@Override
public String formatMessage(int connectionId, String now, long elapsed, String category, String prepared, String sql) {
if (sql.isEmpty()) {
return "";
}
String template = "Hibernate: %s %s {elapsed: %sms}";
String batch = "batch".equals(category) ? ((elapsed == 0) ? "add batch" : "execute batch") : "";
return String.format(template, batch, formatter.format(sql), elapsed);
}
}
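The registration in spy.properties is the same as in the earlier answer (assuming you place the class in a package like my.custom):
logMessageFormat=my.custom.HibernateSqlFormatter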
In P6Spy 3.9 this can be achieved quite simply. In spy.properties, set
customLogMessageFormat=%(effectiveSql)
You can patch com.p6spy.engine.spy.appender.SingleLineFormat.java, removing the prepared element and any reference to P6Util, like so:
package com.p6spy.engine.spy.appender;
public class SingleLineFormat implements MessageFormattingStrategy {
@Override
public String formatMessage(final int connectionId, final String now, final long elapsed, final String category, final String prepared, final String sql) {
return now + "|" + elapsed + "|" + category + "|connection " + connectionId + "|" + sql;
}
}
Then compile just that file (with the p6spy jar on the classpath, since the class implements MessageFormattingStrategy):
javac -cp p6spy.jar com/p6spy/engine/spy/appender/SingleLineFormat.java
And replace the existing class file in p6spy.jar with the new one.
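For example, assuming the compiled class file and the standard JDK jar tool, the update could look like this:
jar uf p6spy.jar com/p6spy/engine/spy/appender/SingleLineFormat.class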
I am using JdbcTemplate to query Hive and then writing the results to a .csv file. Currently I just generate a list of objects and then stream the list to write each record to the file.
I would like to stream the results as they come back from Hive and write them to the file, instead of waiting to get the whole thing and then processing it. Can anyone point me in the right direction? Thanks!
private List<Avs> queryAvsData(String asSql) {
List<Avs> llistAvs = new ArrayList<Avs>();
List<Map<String, Object>> rows = hiveJdbcTemplate.queryForList(asSql);
Iterator<Map<String, Object>> it = rows.iterator();
while (it.hasNext()) {
Map<String, Object> row = it.next();
Avs laAvs = Avs.builder()
.make((String) row.get("make"))
.model((String) row.get("model"))
.build();
llistAvs.add(laAvs);
}
return llistAvs;
}
It doesn't look like there's a built-in solution, but you can do it. Basically, you wrap the existing functionality in an iterator, and use a spliterator to turn it into a stream. Here's a blog post on the subject:
The code implements Spring’s ResultSetExtractor interface, which is a Single Abstract Method (SAM) interface, allowing the use of a lambda expression to implement it.
The implementation wraps the SQL ResultSet in an iterator, constructs a stream using the Spliterators and StreamSupport utility classes, and applies that to a Function taking a stream of row sets and returning a generic result.
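For illustration, here's a minimal sketch of that approach applied to the question's code. The Avs type and hiveJdbcTemplate come from the question; the method name and the rest are my own assumptions, not the blog post's exact code:
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.function.Function;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;
import org.springframework.jdbc.core.ResultSetExtractor;
private <T> T queryAvsAsStream(String asSql, Function<Stream<Avs>, T> consumer) {
    ResultSetExtractor<T> extractor = (ResultSet rs) -> {
        // Wrap the open ResultSet in an Iterator with one-row lookahead,
        // so hasNext() can be called more than once without skipping rows.
        Iterator<Avs> it = new Iterator<Avs>() {
            private boolean advanced;
            private boolean hasRow;
            @Override
            public boolean hasNext() {
                if (!advanced) {
                    try {
                        hasRow = rs.next();
                    } catch (SQLException e) {
                        throw new RuntimeException(e);
                    }
                    advanced = true;
                }
                return hasRow;
            }
            @Override
            public Avs next() {
                if (!hasNext()) {
                    throw new NoSuchElementException();
                }
                advanced = false; // consume the current row
                try {
                    return Avs.builder()
                            .make(rs.getString("make"))
                            .model(rs.getString("model"))
                            .build();
                } catch (SQLException e) {
                    throw new RuntimeException(e);
                }
            }
        };
        // Turn the iterator into a Stream and hand it to the caller's function.
        // The stream is only valid inside this extractor, while the connection
        // is still open.
        return consumer.apply(StreamSupport.stream(
                Spliterators.spliteratorUnknownSize(it, Spliterator.ORDERED), false));
    };
    return hiveJdbcTemplate.query(asSql, extractor);
}
The caller passes a function that consumes the stream while the connection is open, e.g. writing each record to the .csv file and returning a row count.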
It's possible to stream values from JdbcTemplate. The following example is a service based on Spring Boot 2.4.8.
Since I ran into problems (a connection leak) using queryForStream, I'll put demo code here just to show that the stream must be closed after usage.
import lombok.RequiredArgsConstructor;
import org.springframework.jdbc.core.SingleColumnRowMapper;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
import org.springframework.stereotype.Service;
import java.util.Map;
import java.util.stream.Stream;
@Service
@RequiredArgsConstructor
public class DataCleaningService {
private final NamedParameterJdbcTemplate jdbcTemplate;
public void doSomeStreaming() {
String nativeQuery = "SELECT string_value FROM my_table WHERE column = :valueToFilter";
Map<String, Object> queryParameters = Map.of("valueToFilter", "my value");
SingleColumnRowMapper<String> stringRowMapper = SingleColumnRowMapper.newInstance(String.class);
try (Stream<String> stringValueStream = jdbcTemplate.queryForStream(nativeQuery, queryParameters, stringRowMapper)) {
stringValueStream.forEach(stringValue -> {
// do the needed action with the value
//..
System.out.printf("My cool value: %s", stringValue);
});
}
}
}
So I have an application with a ton of migrations made by Entity Framework.
We want to get a script for all the migrations at once, and using the -Script tag works fine.
However, it does not add GO statements to the SQL, giving us errors like "ALTER VIEW must be the first statement in a query batch"...
I have been searching around, and manually adding Sql("GO"); helps with this problem, but only for the entire script. When I use the Package Manager Console again it returns an exception:
System.Data.SqlClient.SqlException (0x80131904): Could not find stored procedure 'GO'.
Is there a way to add these GO tags only when using the -Script tag?
If not, what is a good approach for this?
Note: we have also tried having multiple files, but since we have so many migrations this is nearly impossible to maintain every time.
If you are trying to alter your view using Sql("Alter View dbo.Foos As etc"), then you can avoid the "must be the first statement in a batch" error without adding GO statements by putting the SQL inside an EXEC command:
Sql("EXEC('Alter View dbo.Foos As etc')")
In order to change the SQL generated by Entity Framework migrations, you can create a new SqlServerMigrationSqlGenerator.
We have done this to add a GO statement before and after the migration history:
public class MigrationScriptBuilder: SqlServerMigrationSqlGenerator
{
protected override void Generate(System.Data.Entity.Migrations.Model.InsertHistoryOperation insertHistoryOperation)
{
Statement("GO");
base.Generate(insertHistoryOperation);
Statement("GO");
}
}
then add it in the Configuration constructor (in the Migrations folder of the project where your DbContext is) so that it uses this new SQL generator:
[...]
internal sealed class Configuration : DbMigrationsConfiguration<PMA.Dal.PmaContext>
{
public Configuration()
{
SetSqlGenerator("System.Data.SqlClient", new MigrationScriptBuilder());
AutomaticMigrationsEnabled = false;
}
[...]
So now when you generate a script using the -Script tag, you can see that the insert into [__MigrationHistory] is surrounded by GO.
Alternatively, in your implementation of SqlServerMigrationSqlGenerator you can override any part of the script generation; the InsertHistoryOperation was suitable for us.
It turns out the concept exists deep in the SqlServerMigrationSqlGenerator as an optional argument, Statement(sql, batchTerminator). Here is something based on Skyp's idea. It works both in -Script mode and not. The GOs are for different operations than in Skyp's answer only because our needs are a little different. You then need to register this class in the Configuration as per Skyp's instructions.
public class MigrationScriptBuilder : SqlServerMigrationSqlGenerator
{
private string Marker = Guid.NewGuid().ToString(); // dummy marker used to get past the base generator's null/empty check
protected override void Generate(AlterProcedureOperation alterProcedureOperation)
{
SqlGo();
base.Generate(alterProcedureOperation);
SqlGo();
}
protected override void Generate(CreateProcedureOperation createProcedureOperation)
{
SqlGo();
base.Generate(createProcedureOperation);
SqlGo();
}
protected override void Generate(SqlOperation sqlOperation)
{
SqlGo();
base.Generate(sqlOperation);
}
private void SqlGo()
{
Statement(Marker, batchTerminator: "GO");
}
public override IEnumerable<MigrationStatement> Generate(IEnumerable<MigrationOperation> migrationOperations, string providerManifestToken)
{
var result = new List<MigrationStatement>();
var statements = base.Generate(migrationOperations, providerManifestToken);
bool pendingBatchTerminator = false;
foreach (var item in statements)
{
if(item.Sql == Marker && item.BatchTerminator == "GO")
{
pendingBatchTerminator = true;
}
else
{
if(pendingBatchTerminator)
{
item.BatchTerminator = "GO";
pendingBatchTerminator = false;
}
result.Add(item);
}
}
return result;
}
}
The easiest way is to add /**/ before the GO statement.
Then just replace the current statement with .Replace("GO", "");
Lucene has Analyzers that basically tokenize and filter the corpus when indexing. Operations include converting tokens to lowercase, stemming, removing stopwords, etc.
I'm running an experiment where I want to try all possible combinations of analysis operations: stemming only, stopping only, stemming and stopping, ...
In total, there are 36 combinations that I want to try.
How can I easily and gracefully do this?
I know that I can extend the Analyzer class and implement the tokenStream() function to create my own Analyzer:
public class MyAnalyzer extends Analyzer
{
    public TokenStream tokenStream(String field, final Reader reader) {
        return new NameFilter(
            new CaseNumberFilter(
                new StopFilter(
                    new LowerCaseFilter(
                        new StandardFilter(
                            new StandardTokenizer(reader)
                        )
                    ), StopAnalyzer.ENGLISH_STOP_WORDS)
            )
        );
    }
}
What I'd like to do is write one such class, which can somehow take boolean values for each of the possible operations (doStopping, doStemming, etc.). I don't want to have to write 36 different Analyzer classes that each perform one of the 36 combinations. What makes it difficult is the way the filters are all combined together in their constructors.
Any ideas on how to do this gracefully?
EDIT: By "gracefully", I mean that I can easily create a new Analyzer in some sort of loop:
analyzer = new MyAnalyzer(doStemming, doStopping, ...)
where doStemming and doStopping change with each loop iteration.
Solr solves this problem by using Tokenizer and TokenFilter factories. You could do the same, for example:
public interface TokenizerFactory {
Tokenizer newTokenizer(Reader reader);
}
public interface TokenFilterFactory {
TokenFilter newTokenFilter(TokenStream source);
}
public class ConfigurableAnalyzer {
private final TokenizerFactory tokenizerFactory;
private final List<TokenFilterFactory> tokenFilterFactories;
public ConfigurableAnalyzer(TokenizerFactory tokenizerFactory, TokenFilterFactory... tokenFilterFactories) {
this.tokenizerFactory = tokenizerFactory;
this.tokenFilterFactories = Arrays.asList(tokenFilterFactories);
}
public TokenStream tokenStream(String field, Reader source) {
TokenStream sink = tokenizerFactory.newTokenizer(source);
for (TokenFilterFactory tokenFilterFactory : tokenFilterFactories) {
sink = tokenFilterFactory.newTokenFilter(sink);
}
return sink;
}
}
This way, you can configure your analyzer by passing a factory for one tokenizer and 0 to n filters as constructor arguments.
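For example, here's a rough sketch of how you could then enumerate combinations in a loop, matching the OP's goal. The lambdas target the SAM interfaces above; the exact StandardTokenizer/StopFilter/PorterStemFilter constructors vary by Lucene version, so treat those calls (and the needed java.util/Lucene imports) as placeholders:
TokenizerFactory standard = reader -> new StandardTokenizer(reader);
TokenFilterFactory stop = source -> new StopFilter(source, StopAnalyzer.ENGLISH_STOP_WORDS_SET);
TokenFilterFactory stem = source -> new PorterStemFilter(source);
// Each list is one combination of analysis operations to test.
List<List<TokenFilterFactory>> combinations = Arrays.asList(
        Collections.<TokenFilterFactory>emptyList(), // no filtering
        Arrays.asList(stop),                         // stopping only
        Arrays.asList(stem),                         // stemming only
        Arrays.asList(stop, stem));                  // stopping + stemming
for (List<TokenFilterFactory> combination : combinations) {
    ConfigurableAnalyzer analyzer = new ConfigurableAnalyzer(
            standard, combination.toArray(new TokenFilterFactory[0]));
    // ... run the indexing experiment with this analyzer ...
}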
Add some class variables to the custom Analyzer class which can be easily set and unset on the fly. Then, in the tokenStream() function, use those variables to determine which filters to perform.
public class MyAnalyzer extends Analyzer {
private Set customStopSet;
public static final String[] STOP_WORDS = ...;
private boolean doStemming = false;
private boolean doStopping = false;
public MyAnalyzer(){
super();
customStopSet = StopFilter.makeStopSet(STOP_WORDS);
}
public void setDoStemming(boolean val){
this.doStemming = val;
}
public void setDoStopping(boolean val){
this.doStopping = val;
}
public TokenStream tokenStream(String fieldName, Reader reader) {
// First, convert to lower case
TokenStream out = new LowerCaseTokenizer(reader);
if (this.doStopping){
out = new StopFilter(true, out, customStopSet);
}
if (this.doStemming){
out = new PorterStemFilter(out);
}
return out;
}
}
There is one gotcha: LowerCaseTokenizer takes as input the reader variable, and returns a TokenStream. This is fine for the following filters (StopFilter, PorterStemFilter), because they take TokenStreams as input and return them as output, and so we can chain them together nicely. However, this means you can't have a filter before the LowerCaseTokenizer that returns a TokenStream. In my case, I wanted to split camelCase words into parts, and this has to be done before converting to lower case. My solution was to perform the splitting manually in the custom Indexer class, so by the time MyAnalyzer sees the text, it has already been split.
(I have also added a boolean flag to my custom Indexer class, so now both can work based solely on flags.)
Is there a better answer?
I'm looking to write a test for a function that just returns a value - that's it. I'm not sure how you could do that. I'm under the impression you have to use system.assert or something. New to SFDC, but have programmed in many other languages. Here's some sample code:
static String getBrowserName()
{
String userAgent = ApexPages.currentPage().getHeaders().get('User-Agent');
if (userAgent.contains('iPhone'))
return 'iPhone-Safari';
if (userAgent.contains('Salesforce'))
return 'Salesforce';
if (userAgent.contains('BlackBerry'))
return 'BlackBerry';
if (userAgent.contains('Firefox'))
return 'Firefox';
if (userAgent.contains('Safari'))
return 'Safari';
if (userAgent.contains('internet explorer'))
return 'ie';
return 'other';
}
How can you obtain 100% test coverage for that?
While Salesforce's lack of a mocking framework is infuriating because of the hoops you have to jump through when testing things like page controllers, it's important to think about what you want to test here. Assuming that what you specifically want to test is that given the user agent strings your code returns the appropriate string, then I think something like the following should work:
static String getBrowserName(string userAgentStringToTest)
{
PageReference pageRef = getPageReference(userAgentStringToTest);
String userAgent = getUserAgent(pageRef);
...
}
PageReference getPageReference(string userAgentStringToTest)
{
if(userAgentStringToTest.length() == 0)
{
return ApexPages.currentPage();
}
else
{
PageReference pageRef = new PageReference('someURL');
pageRef.getHeaders().put('User-Agent', userAgentStringToTest);
return pageRef;
}
}
String getUserAgent(PageReference pageRef)
{
return pageRef.getHeaders().get('User-Agent');
}
You would then call the getBrowserName method with the empty string in your production code and with the string you want to test in your test code.
There are a few different flavours to this of course - you could overload the methods and have a parameterless method for the main code and a parameterized method for testing. It's not ideal, but I don't know of another way to do it on the force.com platform currently.
EDIT: Just for completeness, I'm adding sample tests to clarify things. My example showed how to refactor the production code to make it testable, but did not give an example of how to write a test like the OP asked for.
Your tests would look something like this:
static testMethod void checkIPhoneBrowser()
{
String actualBrowserName = getBrowserName('string containing iPhone somewhere');
String expectedBrowserName = 'iPhone-Safari';
System.assertEquals(expectedBrowserName, actualBrowserName);
}
static testMethod void checkIEBrowser()
{
String actualBrowserName = getBrowserName('string containing internet explorer somewhere');
String expectedBrowserName = 'ie';
System.assertEquals(expectedBrowserName, actualBrowserName);
}
...
I am using Apache's Velocity templating engine, and I would like to create a custom Directive. That is, I want to be able to write "#doMyThing()" and have it invoke some java code I wrote in order to generate the text.
I know that I can register a custom directive by adding a line
userdirective=my.package.here.MyDirectiveName
to my velocity.properties file. And I know that I can write such a class by extending the Directive class. What I don't know is how to extend the Directive class -- there doesn't seem to be any documentation for the author of a new Directive. For instance, I'd like to know whether my getType() method should return "BLOCK" or "LINE", and what my setLocation() method should do.
Is there any documentation out there that is better than just "Use the source, Luke"?
On the Velocity wiki, there's a presentation and sample code from a talk I gave called "Hacking Velocity". It includes an example of a custom directive.
I was also trying to come up with a custom directive. I couldn't find any documentation at all, so I looked at some user-created directives: IfNullDirective (a nice and easy one), MergeDirective, as well as Velocity's built-in directives.
Here is my simple block directive that returns compressed content (the complete project, with some directive installation instructions, is located here):
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;
import org.apache.velocity.context.InternalContextAdapter;
import org.apache.velocity.exception.MethodInvocationException;
import org.apache.velocity.exception.ParseErrorException;
import org.apache.velocity.exception.ResourceNotFoundException;
import org.apache.velocity.exception.TemplateInitException;
import org.apache.velocity.runtime.RuntimeServices;
import org.apache.velocity.runtime.directive.Directive;
import org.apache.velocity.runtime.parser.node.Node;
import org.apache.velocity.runtime.log.Log;
import com.googlecode.htmlcompressor.compressor.HtmlCompressor;
/**
* Velocity directive that compresses an HTML content within #compressHtml ... #end block.
*/
public class HtmlCompressorDirective extends Directive {
private static final HtmlCompressor htmlCompressor = new HtmlCompressor();
private Log log;
public String getName() {
return "compressHtml";
}
public int getType() {
return BLOCK;
}
@Override
public void init(RuntimeServices rs, InternalContextAdapter context, Node node) throws TemplateInitException {
super.init(rs, context, node);
log = rs.getLog();
//set compressor properties
htmlCompressor.setEnabled(rs.getBoolean("userdirective.compressHtml.enabled", true));
htmlCompressor.setRemoveComments(rs.getBoolean("userdirective.compressHtml.removeComments", true));
}
public boolean render(InternalContextAdapter context, Writer writer, Node node)
throws IOException, ResourceNotFoundException, ParseErrorException, MethodInvocationException {
//render content to a variable
StringWriter content = new StringWriter();
node.jjtGetChild(0).render(context, content);
//compress
try {
writer.write(htmlCompressor.compress(content.toString()));
} catch (Exception e) {
writer.write(content.toString());
String msg = "Failed to compress content: "+content.toString();
log.error(msg, e);
throw new RuntimeException(msg, e);
}
return true;
}
}
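In a template, the directive then wraps whatever markup should be compressed, e.g.:
#compressHtml
<html>
    <body>
        Hello,   world!
    </body>
</html>
#end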
Block directives always accept a body and must end with #end when used in a template. e.g. #foreach( $i in $foo ) this has a body! #end
Line directives do not have a body or an #end. e.g. #parse( 'foo.vtl' )
You don't need to bother with setLocation() at all. The parser uses that.
Any other specifics I can help with?
Also, have you considered using a "tool" approach? Even if you don't use VelocityTools to automatically make your tool available and whatnot, you can just create a tool class that does what you want, put it in the context and either have a method you call to generate content or else just have its toString() method generate the content. e.g. $tool.doMyThing() or just $myThing
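For example, a minimal sketch of the tool approach (MyThingTool and doMyThing are made-up names here):
import org.apache.velocity.VelocityContext;
// A plain class; it needs no Velocity API at all.
public class MyThingTool {
    public String doMyThing() {
        return "whatever text you want generated";
    }
}
// When populating the context:
VelocityContext context = new VelocityContext();
context.put("tool", new MyThingTool());
// In the template: $tool.doMyThing()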
Directives are best for when you need to mess with Velocity internals (access to InternalContextAdapter or actual Nodes).
Prior to Velocity 1.6 I had a #blockset($v)#end directive to be able to deal with a multiline #set($v), but this function is now handled by the #define directive.
Custom block directives are a pain with modern IDEs because they don't parse the structure correctly; they assume the #end associated with #userBlockDirective is extra and paint the whole file red. They should be avoided if possible.
I copied something similar from the Velocity source code and created a "blockset" (multiline) directive.
import org.apache.velocity.runtime.directive.Directive;
import org.apache.velocity.runtime.RuntimeServices;
import org.apache.velocity.runtime.parser.node.Node;
import org.apache.velocity.context.InternalContextAdapter;
import org.apache.velocity.exception.MethodInvocationException;
import org.apache.velocity.exception.ResourceNotFoundException;
import org.apache.velocity.exception.ParseErrorException;
import org.apache.velocity.exception.TemplateInitException;
import java.io.Writer;
import java.io.IOException;
import java.io.StringWriter;
public class BlockSetDirective extends Directive {
private String blockKey;
/**
* Return name of this directive.
*/
public String getName() {
return "blockset";
}
/**
* Return type of this directive.
*/
public int getType() {
return BLOCK;
}
/**
* simple init - get the blockKey
*/
public void init( RuntimeServices rs, InternalContextAdapter context,
Node node )
throws TemplateInitException {
super.init( rs, context, node );
/*
* first token is the name of the block. I don't even check the format,
* just assume it looks like this: $block_name. Should check if it has
* a '$' or not like macros.
*/
blockKey = node.jjtGetChild( 0 ).getFirstToken().image.substring( 1 );
}
/**
* Renders node to internal string writer and stores in the context at the
* specified context variable
*/
public boolean render( InternalContextAdapter context, Writer writer,
Node node )
throws IOException, MethodInvocationException,
ResourceNotFoundException, ParseErrorException {
StringWriter sw = new StringWriter(256);
boolean b = node.jjtGetChild( 1 ).render( context, sw );
context.put( blockKey, sw.toString() );
return b;
}
}
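Usage in a template would then look something like this (per the init() code, the first argument is the $name the rendered block gets stored under):
#blockset( $multiline )
    first line
    second line
#end
## later on, $multiline renders the captured content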