I want to set a ttl for my keys that are stored in Redis, and I have done that in the following way:
@Component
public class RedisBetgeniusMarketService implements BetgeniusMarketService {

    private static final int DEFAULT_EVENTS_LIFE_TIME = 240;

    @Value("${redis.events.lifetime}")
    private long eventsLifeTime = DEFAULT_EVENTS_LIFE_TIME;

    @Autowired
    private RedisTemplate<String, Market> marketTemplate;

    @Override
    public Market findOne(Integer fixtureId, Long marketId) {
        String key = buildKey(fixtureId, marketId);
        return marketTemplate.boundValueOps(key).get();
    }

    @Override
    public void save(Integer fixtureId, Market market) {
        String key = buildKey(fixtureId, market.getId());
        BoundValueOperations<String, Market> boundValueOperations = marketTemplate.boundValueOps(key);
        boundValueOperations.expire(eventsLifeTime, TimeUnit.MINUTES);
        boundValueOperations.set(market);
    }

    private String buildKey(Integer fixtureId, Long marketId) {
        return "market:" + fixtureId + ":" + marketId;
    }
}
But, when I am printing the ttl of the created key it's equal to -1.
Please, tell me what I am doing wrong.
The template bean is configured in the following way:
@Bean
public RedisTemplate<String, com.egalacoral.spark.betsync.entity.Market> marketTemplate(RedisConnectionFactory connectionFactory) {
    final RedisTemplate<String, com.egalacoral.spark.betsync.entity.Market> redisTemplate = new RedisTemplate<>();
    redisTemplate.setKeySerializer(new StringRedisSerializer());
    redisTemplate.setValueSerializer(new Jackson2JsonRedisSerializer(com.egalacoral.spark.betsync.entity.Market.class));
    redisTemplate.setConnectionFactory(connectionFactory);
    return redisTemplate;
}
You need to call expire(…) and set(…) in a different order. The SET command removes any timeout that was previously applied:
From the documentation at http://redis.io/commands/set:
Set key to hold the string value. If key already holds a value, it is overwritten, regardless of its type. Any previous time to live associated with the key is discarded on successful SET operation.
In your case you just need to switch the order of expire(…) and set(…) to set(…) and expire(…).
@Override
public void save(Integer fixtureId, Market market) {
    String key = buildKey(fixtureId, market.getId());
    BoundValueOperations<String, Market> boundValueOperations = marketTemplate.boundValueOps(key);
    boundValueOperations.set(market);
    boundValueOperations.expire(eventsLifeTime, TimeUnit.MINUTES);
}
Besides that, you could improve the code by setting the value and the expiry in a single call. ValueOperations (RedisOperations.opsForValue()) provides a set method that sets the value and the timeout with the signature
void set(K key, V value, long timeout, TimeUnit unit);
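For example, the save(…) method could then be reduced to a single call (a minimal sketch, assuming the same marketTemplate, eventsLifeTime and buildKey(…) as in the question):

@Override
public void save(Integer fixtureId, Market market) {
    String key = buildKey(fixtureId, market.getId());
    // Writes the value and applies the TTL in one operation, so the key never
    // exists without an expiry between two separate calls.
    marketTemplate.opsForValue().set(key, market, eventsLifeTime, TimeUnit.MINUTES);
}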
You can also try this to set a key in Redis that expires after a given time:
redisTemplate.opsForValue().set(key, value, 1, TimeUnit.MINUTES);
I swapped the set() and expire() calls and it started working.
@Override
public void save(Integer fixtureId, Market market) {
    String key = buildKey(fixtureId, market.getId());
    BoundValueOperations<String, Market> boundValueOperations = marketTemplate.boundValueOps(key);
    boundValueOperations.set(market);
    boundValueOperations.expire(eventsLifeTime, TimeUnit.MINUTES);
}
Related
Using the Jackson Hash Mapper with Flatten=true my Date fields are getting discarded. Is this the correct behaviour or a bug? Is there a way to have Date serialized with Flatten=true?
I've used the following test Pojo:
import java.util.Date;

public class FooClass {
    private Boolean foolean;
    private Integer barteger;
    private String simpleString;
    private Date myDate;

    public void setFoolean(Boolean value) { foolean = value; }
    public Boolean getFoolean() { return foolean; }
    public void setBarteger(Integer value) { barteger = value; }
    public Integer getBarteger() { return barteger; }
    public void setSimpleString(String value) { simpleString = value; }
    public String getSimpleString() { return simpleString; }
    public void setMyDate(Date value) { myDate = value; }
    public Date getMyDate() { return myDate; }
}
public class Main {
    public static void main(String[] args) throws ParseException,
            JsonParseException, JsonMappingException, IOException {
        Jackson2HashMapper hashMapper = new Jackson2HashMapper(true);

        FooClass fooObject = new FooClass();
        fooObject.setFoolean(true);
        fooObject.setBarteger(10);
        fooObject.setSimpleString("Foobar");
        fooObject.setMyDate(new Date());

        Map<String, Object> hash = hashMapper.toHash(fooObject);
        for (String key : hash.keySet()) {
            System.out.println("hash contains: " + key + "=" + hash.get(key));
        }

        FooClass newFoo = (FooClass) hashMapper.fromHash(hash);
        System.out.println("FromHash: " + newFoo);
    }
}
In this case I get the following output:
hash contains: @class=FooClass
hash contains: foolean=true
hash contains: barteger=10
hash contains: simpleString=Foobar
FromHash: FooClass@117159c0
If I change it to new Jackson2HashMapper(false), then I get:
hash contains: @class=FooClass
hash contains: foolean=true
hash contains: barteger=10
hash contains: simpleString=Foobar
hash contains: myDate=[java.util.Date, 1547033077869]
FromHash: FooClass@7ed7259e
I was expecting to get the Date field serialized in both cases - perhaps with an additional field describing the date type (flattened).
I traced the reason for this to the following line in the HashMapper code:
typingMapper.enableDefaultTyping(DefaultTyping.NON_FINAL, As.PROPERTY);
Where the mapper is configured.
After digging into the source, it seems to be an issue in Jackson2HashMapper itself.
I created an issue for this: DATAREDIS-1001, "Jackson2HashMapper does not serialize Date/Calendar fields when flatten = true".
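Until that is resolved, one possible workaround (a sketch of my own, not part of the original post) is to expose the timestamp as a primitive long on the POJO, which survives the flattening, and keep the raw Date out of the hash:

import java.util.Date;
import com.fasterxml.jackson.annotation.JsonIgnore;

public class FooClass {
    private Boolean foolean;
    private Integer barteger;
    private String simpleString;
    private Date myDate;

    // ... other getters/setters unchanged ...

    @JsonIgnore                       // assumption: exclude the raw Date from the mapped hash
    public Date getMyDate() { return myDate; }
    public void setMyDate(Date value) { myDate = value; }

    // Mirror the Date as epoch millis; a plain long flattens like the other scalars above.
    public long getMyDateMillis() { return myDate == null ? 0L : myDate.getTime(); }
    public void setMyDateMillis(long millis) { this.myDate = new Date(millis); }
}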
Hi, I am new to Storm and Kafka.
I am using Storm 1.0.1 and Kafka 0.10.0.
We have a KafkaSpout that should receive a Java bean from a Kafka topic.
I have spent several hours digging to find the right approach for that.
I found a few articles which are useful, but none of the approaches have worked for me so far.
Following is my code:
StormTopology:
public class StormTopology {
public static void main(String[] args) throws Exception {
//Topo test /zkroot test
if (args.length == 4) {
System.out.println("started");
BrokerHosts hosts = new ZkHosts("localhost:2181");
SpoutConfig kafkaConf1 = new SpoutConfig(hosts, args[1], args[2],
args[3]);
kafkaConf1.zkRoot = args[2];
kafkaConf1.useStartOffsetTimeIfOffsetOutOfRange = true;
kafkaConf1.startOffsetTime = kafka.api.OffsetRequest.LatestTime();
kafkaConf1.scheme = new SchemeAsMultiScheme(new KryoScheme());
KafkaSpout kafkaSpout1 = new KafkaSpout(kafkaConf1);
System.out.println("started");
ShuffleBolt shuffleBolt = new ShuffleBolt(args[1]);
AnalysisBolt analysisBolt = new AnalysisBolt(args[1]);
TopologyBuilder topologyBuilder = new TopologyBuilder();
topologyBuilder.setSpout("kafkaspout", kafkaSpout1, 1);
//builder.setBolt("counterbolt2", countbolt2, 3).shuffleGrouping("kafkaspout");
//This is for field grouping in bolt we need two bolt for field grouping or it wont work
topologyBuilder.setBolt("shuffleBolt", shuffleBolt, 3).shuffleGrouping("kafkaspout");
topologyBuilder.setBolt("analysisBolt", analysisBolt, 5).fieldsGrouping("shuffleBolt", new Fields("trip"));
Config config = new Config();
config.registerSerialization(VehicleTrip.class, VehicleTripKyroSerializer.class);
config.setDebug(true);
config.setNumWorkers(1);
LocalCluster cluster = new LocalCluster();
cluster.submitTopology(args[0], config, topologyBuilder.createTopology());
// StormSubmitter.submitTopology(args[0], config,
// builder.createTopology());
} else {
System.out.println("Insufficient Arguments - topologyName kafkaTopic ZKRoot ID");
}
}
}
I am serializing the data to Kafka using Kryo.
KafkaProducer:
public class StreamKafkaProducer {
private static Producer producer;
private final Properties props = new Properties();
private static final StreamKafkaProducer KAFKA_PRODUCER = new StreamKafkaProducer();
private StreamKafkaProducer(){
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "all");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "com.abc.serializer.MySerializer");
producer = new org.apache.kafka.clients.producer.KafkaProducer(props);
}
public static StreamKafkaProducer getStreamKafkaProducer(){
return KAFKA_PRODUCER;
}
public void produce(String topic, VehicleTrip vehicleTrip){
ProducerRecord<String,VehicleTrip> producerRecord = new ProducerRecord<>(topic,vehicleTrip);
producer.send(producerRecord);
//producer.close();
}
public static void closeProducer(){
producer.close();
}
}
Kryo Serializer:
public class DataKyroSerializer extends Serializer<Data> implements Serializable {

    @Override
    public void write(Kryo kryo, Output output, Data data) {
        output.writeLong(data.getStartedOn().getTime());
        output.writeLong(data.getEndedOn().getTime());
    }

    @Override
    public Data read(Kryo kryo, Input input, Class<Data> aClass) {
        Data data = new Data();
        data.setStartedOn(new Date(input.readLong()));
        data.setEndedOn(new Date(input.readLong()));
        return data;
    }
}
I need to get the data back to the Data bean.
As per a few articles, I need to provide a custom Scheme and make it part of the topology, but so far I have had no luck.
Code for Bolt and Scheme
Scheme:
public class KryoScheme implements Scheme {

    private ThreadLocal<Kryo> kryos = new ThreadLocal<Kryo>() {
        @Override
        protected Kryo initialValue() {
            Kryo kryo = new Kryo();
            kryo.addDefaultSerializer(Data.class, new DataKyroSerializer());
            return kryo;
        }
    };

    @Override
    public List<Object> deserialize(ByteBuffer ser) {
        return Utils.tuple(kryos.get().readObject(new ByteBufferInput(ser.array()), Data.class));
    }

    @Override
    public Fields getOutputFields() {
        return new Fields("data");
    }
}
and bolt:
public class AnalysisBolt implements IBasicBolt {
/**
*
*/
private static final long serialVersionUID = 1L;
private String topicname = null;
public AnalysisBolt(String topicname) {
this.topicname = topicname;
}
public void prepare(Map stormConf, TopologyContext topologyContext) {
System.out.println("prepare");
}
public void execute(Tuple input, BasicOutputCollector collector) {
System.out.println("execute");
Fields fields = input.getFields();
try {
JSONObject eventJson = (JSONObject) JSONSerializer.toJSON((String) input
.getValueByField(fields.get(1)));
String StartTime = (String) eventJson.get("startedOn");
String EndTime = (String) eventJson.get("endedOn");
String Oid = (String) eventJson.get("_id");
int V_id = (Integer) eventJson.get("vehicleId");
//call method getEventForVehicleWithinTime(Long vehicleId, Date startTime, Date endTime)
System.out.println("==========="+Oid+"| "+V_id+"| "+StartTime+"| "+EndTime);
} catch (Exception e) {
e.printStackTrace();
}
}
But if I submit the Storm topology, I am getting this error:
java.lang.IllegalStateException: Spout 'kafkaspout' contains a
non-serializable field of type com.abc.topology.KryoScheme$1, which
was instantiated prior to topology creation.
com.minda.iconnect.topology.KryoScheme$1 should be instantiated within
the prepare method of 'kafkaspout at the earliest.
I would appreciate help debugging the issue and guidance toward the right path.
Thanks
Your ThreadLocal is not Serializable. The preferable solution would be to make your serializer both Serializable and threadsafe. If this is not possible, then I see 2 alternatives since there is no prepare method as you would get in a bolt.
Declare it as static, which is inherently transient.
Declare it transient and access it via a private get method. Then you can initialize the variable on first access.
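For option 2, a minimal sketch based on the KryoScheme from the question (only the transient field and the lazy getter are new):

public class KryoScheme implements Scheme {

    // transient: skipped when the topology is serialized, rebuilt on the worker on first use
    private transient ThreadLocal<Kryo> kryos;

    private ThreadLocal<Kryo> kryos() {
        if (kryos == null) {
            kryos = new ThreadLocal<Kryo>() {
                @Override
                protected Kryo initialValue() {
                    Kryo kryo = new Kryo();
                    kryo.addDefaultSerializer(Data.class, new DataKyroSerializer());
                    return kryo;
                }
            };
        }
        return kryos;
    }

    @Override
    public List<Object> deserialize(ByteBuffer ser) {
        return Utils.tuple(kryos().get().readObject(new ByteBufferInput(ser.array()), Data.class));
    }

    @Override
    public Fields getOutputFields() {
        return new Fields("data");
    }
}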
Within the Storm lifecycle, the topology is instantiated and then serialized to byte format to be stored in ZooKeeper, prior to the topology being executed. Within this step, if a spout or bolt within the topology has an initialized unserializable property, serialization will fail.
If there is a need for a field that is unserializable, initialize it within the bolt or spout's prepare method, which is run after the topology is delivered to the worker.
Source: Best Practices for implementing Apache Storm
I have reviewed multiple examples of how to construct a TreeTable from a Container datasource and by just adding items while iterating over an Object[][]. Still, I'm stuck on my use case.
I have a bean like so...
public class DSRUpdateHourlyDTO implements UniquelyKeyed<AssetOwnedHourlyLocatableId>, Serializable
{
private static final long serialVersionUID = 1L;
private final AssetOwnedHourlyLocatableId id = new AssetOwnedHourlyLocatableId();
private String commitStatus;
private BigDecimal economicMax;
private BigDecimal economicMin;
public void setCommitStatus(String commitStatus) { this.commitStatus = commitStatus; }
public void setEconomicMax(BigDecimal economicMax) { this.economicMax = economicMax; }
public void setEconomicMin(BigDecimal economicMin) { this.economicMin = economicMin; }
public String getCommitStatus() { return commitStatus; }
public BigDecimal getEconomicMax() { return economicMax; }
public BigDecimal getEconomicMin() { return economicMin; }
public AssetOwnedHourlyLocatableId getId() { return id; }
@Override
public AssetOwnedHourlyLocatableId getKey() {
return getId();
}
}
The AssetOwnedHourlyLocatableId is a compound id. It looks like...
public class AssetOwnedHourlyLocatableId implements Serializable, AssetOwned, HasHour, Locatable,
UniquelyKeyed<AssetOwnedHourlyLocatableId> {
private static final long serialVersionUID = 1L;
private String location;
private String hour;
private String assetOwner;
@Override
public String getLocation() {
return location;
}
@Override
public void setLocation(final String location) {
this.location = location;
}
@Override
public String getHour() {
return hour;
}
@Override
public void setHour(final String hour) {
this.hour = hour;
}
@Override
public String getAssetOwner() {
return assetOwner;
}
@Override
public void setAssetOwner(final String assetOwner) {
this.assetOwner = assetOwner;
}
}
I want to generate a grid where the hours are pivoted into column headers and the location is the only other additional column header.
E.g.,
Location 1 2 3 4 5 6 ... 24
would be the column headers.
Underneath each column you might see...
> L1
> Commit Status Status1 .... Status24
> Eco Min EcoMin1 .... EcoMin24
> Eco Max EcoMax1 .... EcoMax24
> L2
> Commit Status Status1 .... Status24
> Eco Min EcoMin1 .... EcoMin24
> Eco Max EcoMax1 .... EcoMax24
So, if I'm provided a List<DSRUpdateHourlyDTO> I want to convert it into the presentation format described above.
What would be the best way to do this?
I have a few additional functional requirements.
I want to be able to toggle between read-only and editable views of the same table.
I want to be able to complete a round-trip to a datasource (e.g., JPAContainerSource).
I (will eventually) want to filter items by any part of the compound id.
My challenge is in the adaptation. I understand the simple use case well, where I could take the list, simply splat it into a BeanItemContainer, and use addNestedContainerProperty and setVisibleColumns. Pivoting properties into columns is what's stumping me.
As it turns out, this was an ill-conceived question.
For data entry purposes, one could use a BeanItemContainer whose columns include the nested container property hour from the composite id and, instead of a TreeTable, use a Table that has commitStatus, ecoMin and ecoMax as columns. Limitation: you'd only ever query for / submit one assetOwner and location's worth of data.
As for display, where you don't care to filter one assetOwner and location's worth of data, you could pivot the hour info as originally described. You could just convert the original bean into another bean suitable for display (where each hour is its own column).
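For reference, a rough sketch of that conversion (my own illustration using the Vaadin 7 Table API; the metric shown, the 1–24 hour range and the assumption that getHour() parses as an int are not from the original post):

// Pivot a List<DSRUpdateHourlyDTO> into a Table with a Location column plus one column per hour.
Table table = new Table("Commit status by hour");
table.addContainerProperty("Location", String.class, null);
for (int hour = 1; hour <= 24; hour++) {
    table.addContainerProperty(Integer.toString(hour), String.class, null);
}

Map<String, Object[]> rowsByLocation = new LinkedHashMap<>();
for (DSRUpdateHourlyDTO dto : dtos) {                    // dtos: the provided List<DSRUpdateHourlyDTO>
    String location = dto.getId().getLocation();
    Object[] cells = rowsByLocation.get(location);
    if (cells == null) {
        cells = new Object[25];                          // index 0 = location, 1..24 = hours
        cells[0] = location;
        rowsByLocation.put(location, cells);
    }
    int hour = Integer.parseInt(dto.getId().getHour());  // assumes hour strings "1".."24"
    cells[hour] = dto.getCommitStatus();
}
for (Object[] cells : rowsByLocation.values()) {
    table.addItem(cells, cells[0]);                      // the location doubles as the item id
}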
This question already has answers here:
Pre- and Post-migration scripts for Flyway
(3 answers)
Closed 7 months ago.
I have a generic cleanup script that I'd like to run after every migration. Is there a good way to have this script run after each migration (short of including the script itself as a change every time I do a migration?)
I see that this question has been asked before here Pre- and Post-migration scripts for Flyway and the answer at that time was no, not really.
Has the answer changed at all in the past 1.5 years?
With Flyway 3.0 the situation has changed and callback scripts are now possible. In this situation an afterMigrate.sql file could be used to do the cleanups.
See https://flywaydb.org/documentation/concepts/callbacks and https://flywaydb.org/documentation/tutorials/callbacks for more information.
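For example (a sketch assuming Flyway 3.x+ and the default classpath location; names are illustrative), the cleanup SQL just needs to live in a file called afterMigrate.sql next to the versioned migrations:

// Programmatic setup; an afterMigrate.sql file in the configured location(s) is
// picked up automatically and executed after every successful migrate run.
Flyway flyway = new Flyway();
flyway.setDataSource(dataSource);               // dataSource assumed to be available
flyway.setLocations("classpath:db/migration");  // afterMigrate.sql lives here as well
flyway.migrate();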
This has not changed. Use any of the suggested workarounds for now.
I've looked at the suggestions here and at Pre- and Post-migration scripts for Flyway, and I'd like to point out a use case for which I can't see which workaround (if any) would be most applicable. The use case is having a DBA create a restore point before running developer-created migrations.
Right now, with our manual (non-Flyway) migration process, a DBA creates a restore point before running a set of migrations. The migrations would run fine without the restore point, but if they don't have the correct code (say, a column creation is missing), it's often preferable to roll back to the Oracle restore point, to avoid downtime and give the developer time to work on a fix.
I don't think requiring the developer to include a migration that does that restore point makes sense, because:
1. They might forget (it should automatically happen, without developer intervention)
2. Depending on the state of the schema, there may be different starting migrations, so if the one that includes the restore point is not run, it may be old, and data may have changed in the interim.
Having a separate migration that does the restore point has similar drawbacks:
1. They would have to manually create a new migration that is essentially a copy of an old migration with a different version number to do the restore point.
For development schemas, with large existing data, it's not practical to clean out the schema while developing a migration, because it predates flyway and may take significant time to recreate from scratch.
For development, ideally the workflow is something like this:
1. create restore point
2. develop migration(s), run using flyway
3. roll back to restore point if migration doesn't work as required.
If there is a way to automate step #1 across the board, it would allow us to use Flyway and eliminate the need for a DBA, except in the cases where something goes wrong and a rollback is necessary. There may be a more 'Flyway' way to approach the problem, but the workarounds I found don't seem to fit into our existing workflow.
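One way step #1 could be automated with the callback mechanism mentioned above (a sketch of my own: the restore point naming and the assumption that the migrating user has the required Oracle privileges are not from the original posts):

// Creates an Oracle restore point before any pending migrations are applied.
public class RestorePointCallback extends BaseFlywayCallback {

    @Override
    public void beforeMigrate(Connection connection) {
        String name = "PRE_FLYWAY_" + System.currentTimeMillis();
        try (Statement stmt = connection.createStatement()) {
            stmt.execute("CREATE RESTORE POINT " + name);
        } catch (SQLException e) {
            throw new RuntimeException("Could not create restore point " + name, e);
        }
    }
}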
We had the same problem, i.e. a bunch of scripts that always have to run before and after every migration, e.g. deleting and creating materialized views, or granting permissions to tables.
These scripts do not change from migration to migration, but they need to be executed.
So I took the org.flywaydb.core.internal.callback.SqlScriptFlywayCallback callback class and adapted it for multiple files.
I tried to stay in the philosophy of flyway and use the following pattern.
Files starting with am__ or AM__ are an after migration script, those with bi__ are for before info, and so on.
I sort the scripts, so that they are executed in the correct order.
public class MultipleScriptPerCallback extends BaseFlywayCallback {
private static final Log LOG = LogFactory.getLog(SqlScriptFlywayCallback.class);
private static final String DELIMITER = "__";
private static final String BEFORE_CLEAN = "bc";
private static final String AFTER_CLEAN = "ac";
private static final String BEFORE_MIGRATE = "bm";
private static final String AFTER_MIGRATE = "am";
private static final String BEFORE_EACH_MIGRATE = "bem";
private static final String AFTER_EACH_MIGRATE = "aem";
private static final String BEFORE_VALIDATE = "bv";
private static final String AFTER_VALIDATE = "av";
private static final String BEFORE_BASELINE = "bb";
private static final String AFTER_BASELINE = "ab";
private static final String BEFORE_REPAIR = "br";
private static final String AFTER_REPAIR = "ar";
private static final String BEFORE_INFO = "bi";
private static final String AFTER_INFO = "ai";
private static final List<String> ALL_CALLBACKS = Arrays.asList(BEFORE_CLEAN, AFTER_CLEAN, BEFORE_MIGRATE, BEFORE_EACH_MIGRATE,
AFTER_EACH_MIGRATE, AFTER_MIGRATE, BEFORE_VALIDATE, AFTER_VALIDATE, BEFORE_BASELINE, AFTER_BASELINE, BEFORE_REPAIR,
AFTER_REPAIR, BEFORE_INFO, AFTER_INFO);
private Map<String, List<SqlScript>> scripts;
@Override
public void setFlywayConfiguration(FlywayConfiguration flywayConfiguration) {
super.setFlywayConfiguration(flywayConfiguration);
if (scripts == null) {
scripts = registerScripts(flywayConfiguration);
}
}
private Map<String, List<SqlScript>> registerScripts(FlywayConfiguration flywayConfiguration) {
Map<String, List<SqlScript>> scripts = new HashMap<>();
for (String callback : ALL_CALLBACKS) {
scripts.put(callback, new ArrayList<SqlScript>());
}
LOG.debug(String.format("%s - Scanning for Multiple SQL callbacks ...", getClass().getSimpleName()));
Locations locations = new Locations(flywayConfiguration.getLocations());
Scanner scanner = new Scanner(flywayConfiguration.getClassLoader());
String sqlMigrationSuffix = flywayConfiguration.getSqlMigrationSuffix();
DbSupport dbSupport = dbSupport(flywayConfiguration);
PlaceholderReplacer placeholderReplacer = createPlaceholderReplacer();
String encoding = flywayConfiguration.getEncoding();
for (Location location : locations.getLocations()) {
Resource[] resources;
try {
resources = scanner.scanForResources(location, "", sqlMigrationSuffix);
} catch (FlywayException e) {
// Ignore missing locations
continue;
}
for (Resource resource : resources) {
String key = extractKeyFromFileName(resource);
if (scripts.keySet().contains(key)) {
LOG.debug(getClass().getSimpleName() + " - found script " + resource.getFilename() + " from location: " + location);
List<SqlScript> sqlScripts = scripts.get(key);
sqlScripts.add(new SqlScript(dbSupport, resource, placeholderReplacer, encoding));
}
}
}
LOG.info(getClass().getSimpleName() + " - scripts registered: " + prettyPrint(scripts));
return scripts;
}
private String prettyPrint(Map<String, List<SqlScript>> scripts) {
StringBuilder prettyPrint = new StringBuilder();
boolean isFirst = true;
for (String key : scripts.keySet()) {
if (!isFirst) {
prettyPrint.append("; ");
}
prettyPrint.append(key).append("=").append("[").append(prettyPrint(scripts.get(key))).append("]");
isFirst = false;
}
return prettyPrint.toString();
}
private String prettyPrint(List<SqlScript> scripts) {
StringBuilder prettyPrint = new StringBuilder();
boolean isFirst = true;
for (SqlScript script : scripts) {
if (!isFirst) {
prettyPrint.append(", ");
}
prettyPrint.append(script.getResource().getFilename());
isFirst = false;
}
return prettyPrint.toString();
}
private String extractKeyFromFileName(Resource resource) {
String filename = resource.getFilename();
return filename.substring(0, (!filename.contains(DELIMITER)) ? 0 : filename.indexOf(DELIMITER)).toLowerCase();
}
private DbSupport dbSupport(FlywayConfiguration flywayConfiguration) {
Connection connectionMetaDataTable = JdbcUtils.openConnection(flywayConfiguration.getDataSource());
return DbSupportFactory.createDbSupport(connectionMetaDataTable, true);
}
/**
* @return A new, fully configured, PlaceholderReplacer.
*/
private PlaceholderReplacer createPlaceholderReplacer() {
if (flywayConfiguration.isPlaceholderReplacement()) {
return
new PlaceholderReplacer(flywayConfiguration.getPlaceholders(), flywayConfiguration.getPlaceholderPrefix(),
flywayConfiguration.getPlaceholderSuffix());
}
return PlaceholderReplacer.NO_PLACEHOLDERS;
}
@Override
public void beforeClean(Connection connection) {
execute(BEFORE_CLEAN, connection);
}
@Override
public void afterClean(Connection connection) {
execute(AFTER_CLEAN, connection);
}
@Override
public void beforeMigrate(Connection connection) {
execute(BEFORE_MIGRATE, connection);
}
@Override
public void afterMigrate(Connection connection) {
execute(AFTER_MIGRATE, connection);
}
@Override
public void beforeEachMigrate(Connection connection, MigrationInfo info) {
execute(BEFORE_EACH_MIGRATE, connection);
}
@Override
public void afterEachMigrate(Connection connection, MigrationInfo info) {
execute(AFTER_EACH_MIGRATE, connection);
}
@Override
public void beforeValidate(Connection connection) {
execute(BEFORE_VALIDATE, connection);
}
@Override
public void afterValidate(Connection connection) {
execute(AFTER_VALIDATE, connection);
}
@Override
public void beforeBaseline(Connection connection) {
execute(BEFORE_BASELINE, connection);
}
@Override
public void afterBaseline(Connection connection) {
execute(AFTER_BASELINE, connection);
}
@Override
public void beforeRepair(Connection connection) {
execute(BEFORE_REPAIR, connection);
}
@Override
public void afterRepair(Connection connection) {
execute(AFTER_REPAIR, connection);
}
@Override
public void beforeInfo(Connection connection) {
execute(BEFORE_INFO, connection);
}
@Override
public void afterInfo(Connection connection) {
execute(AFTER_INFO, connection);
}
private void execute(String key, Connection connection) {
List<SqlScript> sqlScripts = scripts.get(key);
LOG.debug(String.format("%s - sqlscript: %s for key: %s", getClass().getSimpleName(), sqlScripts, key));
Collections.sort(sqlScripts, new SqlScriptLexicalComparator());
for (SqlScript script : sqlScripts) {
executeScript(key, connection, script);
}
}
//Not private for testing
void executeScript(String key, Connection connection, SqlScript script) {
LOG.info(String.format("%s - Executing SQL callback: %s : %s", getClass().getSimpleName(), key,
script.getResource().getFilename()));
script.execute(new JdbcTemplate(connection, 0));
}
//Not private for testing
static final class SqlScriptLexicalComparator implements Comparator<SqlScript> {
@Override
public int compare(SqlScript o1, SqlScript o2) {
return Collator.getInstance().compare(o1.getResource().getFilename(), o2.getResource().getFilename());
}
}
}
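To wire it in, the callback has to be registered on the Flyway instance before migrating (a sketch, assuming the programmatic API of the same Flyway 3.x/4.x era as the class above):

Flyway flyway = new Flyway();
flyway.setCallbacks(new MultipleScriptPerCallback());  // register before calling flyway.migrate()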
We are using Fluent NHibernate with automapping, and we have a naming convention that for all columns that are foreign keys, the column name will end with "Key". So we have a convention that looks like this:
public class ForeignKeyColumnNameConvention : IReferenceConvention
{
public void Apply ( IManyToOneInstance instance )
{
// name the key field
string propertyName = instance.Property.Name;
instance.Column ( propertyName + "Key" );
}
}
This worked great until we created a component in which one of its values is a foreign key. Renaming the column here overrides the default name given to the component column, which includes the ComponentPrefix defined in the AutomappingConfiguration. Is there a way for me to get the ComponentPrefix in this convention? Or is there some other way to get the column name for component properties that are foreign keys to end in the word "Key"?
After a lot of fiddling and trial & error (thus being tempted to use your solution with Reflection) I came up with the following:
This method depends on the order in which the conventions are executed. This convention order follows a strict hierarchy: in this example, the convention of the component (IDynamicComponentConvention) is handled first, and after that the conventions of the inner properties, such as the References mapping (IReferenceConvention), are handled.
The strict order is where we make our strike:
We assemble the correct name of the column in the call to Apply(IDynamicComponentInstance instance) and put it on the queue. Note that a Queue<T> is used, which is a FIFO (first-in-first-out) collection type, so it keeps the order correct.
Almost immediately after that, Apply(IManyToOneInstance instance) is called. We check whether there is anything in the queue. If there is, we take it out of the queue and set it as the column name. Note that you should not use Peek() instead of Dequeue(), as Peek() does not remove the object from the queue.
The code is as follows:
public sealed class CustomNamingConvention : IDynamicComponentConvention, IReferenceConvention {
private static Queue<string> ColumnNames = new Queue<string>();
public void Apply(IDynamicComponentInstance instance) {
foreach (var referenceInspector in instance.References) {
// All the information we need is right here
// But only to inspect, no editing yet :(
// Don't worry, just assemble the name and enqueue it
var name = string.Format("{0}_{1}",
instance.Name,
referenceInspector.Columns.Single().Name);
ColumnNames.Enqueue(name);
}
}
public void Apply(IManyToOneInstance instance) {
if (!ColumnNames.Any())
// Nothing in the queue? Just return then (^_^)
return;
// Set the retrieved string as the column name
var columnName = ColumnNames.Dequeue();
instance.Column(columnName);
// Pick a beer and celebrate the correct naming!
}
}
I have figured out a way to do this using reflection to get to the underlying mapping of the IManyToOneInspector exposed by the IComponentInstance, but I was hoping there was a better way to do this.
Here is some example code of how I achieved this:
#region IConvention<IComponentInspector, IComponentInstance> Members
public void Apply(IComponentInstance instance)
{
foreach (var manyToOneInspector in instance.References)
{
var referenceName = string.Format("{0}_{1}_{2}{3}", instance.EntityType.Name, manyToOneInspector.Property.PropertyType.Name, _autoMappingConfiguration.GetComponentColumnPrefix(instance.Property), manyToOneInspector.Property.Name);
if(manyToOneInspector.Property.PropertyType.IsSubclassOf(typeof(LookupBase)))
{
referenceName += "Lkp";
}
manyToOneInspector.Index ( string.Format ( "{0}_FK_IDX", referenceName ) );
}
}
#endregion
public static class ManyToOneInspectorExtensions
{
public static ManyToOneMapping GetMapping(this IManyToOneInspector manyToOneInspector)
{
var fieldInfo = manyToOneInspector.GetType ().GetField( "mapping", BindingFlags.NonPublic | BindingFlags.Instance );
if (fieldInfo != null)
{
var manyToOneMapping = fieldInfo.GetValue( manyToOneInspector ) as ManyToOneMapping;
return manyToOneMapping;
}
return null;
}
public static void Index(this IManyToOneInspector manyToOneInspector, string indexName)
{
var mapping = manyToOneInspector.GetMapping ();
mapping.Index ( indexName );
}
public static void Column(this IManyToOneInspector manyToOneInspector, string columnName)
{
var mapping = manyToOneInspector.GetMapping ();
mapping.Column ( columnName );
}
public static void ForeignKey(this IManyToOneInspector manyToOneInspector, string foreignKeyName)
{
var mapping = manyToOneInspector.GetMapping();
mapping.ForeignKey ( foreignKeyName );
}
}
public static class ManyToOneMappingExtensions
{
public static void Index (this ManyToOneMapping manyToOneMapping, string indexName)
{
if (manyToOneMapping.Columns.First().IsSpecified("Index"))
return;
foreach (var column in manyToOneMapping.Columns)
{
column.Index = indexName;
}
}
public static void Column(this ManyToOneMapping manyToOneMapping, string columnName)
{
if (manyToOneMapping.Columns.UserDefined.Count() > 0)
return;
var originalColumn = manyToOneMapping.Columns.FirstOrDefault();
var column = originalColumn == null ? new ColumnMapping() : originalColumn.Clone();
column.Name = columnName;
manyToOneMapping.ClearColumns();
manyToOneMapping.AddColumn(column);
}
public static void ForeignKey(this ManyToOneMapping manyToOneMapping, string foreignKeyName)
{
if (!manyToOneMapping.IsSpecified("ForeignKey"))
manyToOneMapping.ForeignKey = foreignKeyName;
}
}