I have a generic cleanup script that I'd like to run after every migration. Is there a good way to have this script run after each migration, short of including the script itself as a change every time I do a migration?
I see that this question has been asked before here: Pre- and Post-migration scripts for Flyway, and the answer at that time was no, not really.
Has the answer changed at all in the past 1.5 years?
With Flyway 3.0 the situation has changed: callback scripts are now possible. In this situation an afterMigrate.sql file can be used to do the cleanups.
See https://flywaydb.org/documentation/concepts/callbacks and https://flywaydb.org/documentation/tutorials/callbacks for more information.
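For illustration only (this is my own sketch, not taken from the Flyway docs, and it assumes the pre-5.x programmatic API plus a DataSource you already have): any afterMigrate.sql file placed in a configured location is picked up and executed after each successful migrate run.
import javax.sql.DataSource;
import org.flywaydb.core.Flyway;

public class MigrateWithCallbacks {
    // Minimal sketch: the DataSource is assumed to come from your own configuration.
    public static void run(DataSource dataSource) {
        Flyway flyway = new Flyway();
        flyway.setDataSource(dataSource);
        // The configured locations are scanned for versioned migrations and also
        // for SQL callback scripts such as beforeMigrate.sql / afterMigrate.sql.
        flyway.setLocations("classpath:db/migration");
        flyway.migrate(); // runs pending migrations, then afterMigrate.sql
    }
}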
This has not changed. Use any of the suggested workarounds for now.
I've looked at the suggestions here and in Pre- and Post-migration scripts for Flyway and would like to point out a use case for which it is not clear which workaround (if any) would be most applicable. The use case is to have a DBA create a restore point before running developer-created migrations.
Right now, with our manual (non-Flyway) migration process, a DBA creates a restore point before running a set of migrations. The migrations would run fine without the restore point. But if they don't contain the correct code (say, a column creation is missing), it's often preferable to roll back to the Oracle restore point, to avoid downtime and give the developer time to work on a fix.
I don't think requiring the developer to include a migration that creates that restore point makes sense, because:
1. They might forget (it should happen automatically, without developer intervention)
2. Depending on the state of the schema, there may be different starting migrations, so if the one that includes the restore point is not run, it may be old, and data may have changed in the interim.
Having a separate migration that does the restore point has similar drawbacks:
1. They would have to manually create a new migration that is essentially a copy of an old migration with a different version number to do the restore point.
For development schemas, with large existing data, it's not practical to clean out the schema while developing a migration, because it predates flyway and may take significant time to recreate from scratch.
For development, ideally the workflow is something like this:
1. create restore point
2. develop migration(s), run using flyway
3. roll back to restore point if migration doesn't work as required.
If there is a way to automate step #1 across the board, it would allow us to use Flyway and eliminate the need for a DBA, except in the cases where something goes wrong and a rollback is necessary. There may be a more 'Flyway' way to approach the problem, but the workarounds I found don't seem to fit into our existing workflow.
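One conceivable way to automate step #1, building on the callback mechanism mentioned in the accepted answer, would be a Java callback that creates the restore point before every migrate run. This is only a rough sketch under my own assumptions (the restore point name is made up, and CREATE RESTORE POINT ... GUARANTEE FLASHBACK DATABASE needs the appropriate Oracle privileges); I have not verified it against our workflow:
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

import org.flywaydb.core.api.FlywayException;
import org.flywaydb.core.api.callback.BaseFlywayCallback;

public class RestorePointCallback extends BaseFlywayCallback {
    @Override
    public void beforeMigrate(Connection connection) {
        // MIGRATION_RP is an illustrative name; handling an already existing
        // restore point (drop and recreate) is left out of this sketch.
        try (Statement stmt = connection.createStatement()) {
            stmt.execute("CREATE RESTORE POINT MIGRATION_RP GUARANTEE FLASHBACK DATABASE");
        } catch (SQLException e) {
            throw new FlywayException("Could not create restore point before migrate", e);
        }
    }
}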
We had the same problem: a bunch of scripts that always have to run before and/or after every migration, e.g. dropping and recreating materialized views, or granting permissions on tables.
These scripts do not change from migration to migration, but they still need to be executed.
So I took the org.flywaydb.core.internal.callback.SqlScriptFlywayCallback callback class and adapted it for multiple files.
I tried to stay within the philosophy of Flyway and used the following pattern.
Files starting with am__ or AM__ are after-migration scripts, those with bi__ run before info, and so on.
I sort the scripts, so that they are executed in the correct order.
public class MultipleScriptPerCallback extends BaseFlywayCallback {
private static final Log LOG = LogFactory.getLog(SqlScriptFlywayCallback.class);
private static final String DELIMITER = "__";
private static final String BEFORE_CLEAN = "bc";
private static final String AFTER_CLEAN = "ac";
private static final String BEFORE_MIGRATE = "bm";
private static final String AFTER_MIGRATE = "am";
private static final String BEFORE_EACH_MIGRATE = "bem";
private static final String AFTER_EACH_MIGRATE = "aem";
private static final String BEFORE_VALIDATE = "bv";
private static final String AFTER_VALIDATE = "av";
private static final String BEFORE_BASELINE = "bb";
private static final String AFTER_BASELINE = "ab";
private static final String BEFORE_REPAIR = "br";
private static final String AFTER_REPAIR = "ar";
private static final String BEFORE_INFO = "bi";
private static final String AFTER_INFO = "ai";
private static final List<String> ALL_CALLBACKS = Arrays.asList(BEFORE_CLEAN, AFTER_CLEAN, BEFORE_MIGRATE, BEFORE_EACH_MIGRATE,
AFTER_EACH_MIGRATE, AFTER_MIGRATE, BEFORE_VALIDATE, AFTER_VALIDATE, BEFORE_BASELINE, AFTER_BASELINE, BEFORE_REPAIR,
AFTER_REPAIR, BEFORE_INFO, AFTER_INFO);
private Map<String, List<SqlScript>> scripts;
@Override
public void setFlywayConfiguration(FlywayConfiguration flywayConfiguration) {
super.setFlywayConfiguration(flywayConfiguration);
if (scripts == null) {
scripts = registerScripts(flywayConfiguration);
}
}
private Map<String, List<SqlScript>> registerScripts(FlywayConfiguration flywayConfiguration) {
Map<String, List<SqlScript>> scripts = new HashMap<>();
for (String callback : ALL_CALLBACKS) {
scripts.put(callback, new ArrayList<SqlScript>());
}
LOG.debug(String.format("%s - Scanning for Multiple SQL callbacks ...", getClass().getSimpleName()));
Locations locations = new Locations(flywayConfiguration.getLocations());
Scanner scanner = new Scanner(flywayConfiguration.getClassLoader());
String sqlMigrationSuffix = flywayConfiguration.getSqlMigrationSuffix();
DbSupport dbSupport = dbSupport(flywayConfiguration);
PlaceholderReplacer placeholderReplacer = createPlaceholderReplacer();
String encoding = flywayConfiguration.getEncoding();
for (Location location : locations.getLocations()) {
Resource[] resources;
try {
resources = scanner.scanForResources(location, "", sqlMigrationSuffix);
} catch (FlywayException e) {
// Ignore missing locations
continue;
}
for (Resource resource : resources) {
String key = extractKeyFromFileName(resource);
if (scripts.keySet().contains(key)) {
LOG.debug(getClass().getSimpleName() + " - found script " + resource.getFilename() + " from location: " + location);
List<SqlScript> sqlScripts = scripts.get(key);
sqlScripts.add(new SqlScript(dbSupport, resource, placeholderReplacer, encoding));
}
}
}
LOG.info(getClass().getSimpleName() + " - scripts registered: " + prettyPrint(scripts));
return scripts;
}
private String prettyPrint(Map<String, List<SqlScript>> scripts) {
StringBuilder prettyPrint = new StringBuilder();
boolean isFirst = true;
for (String key : scripts.keySet()) {
if (!isFirst) {
prettyPrint.append("; ");
}
prettyPrint.append(key).append("=").append("[").append(prettyPrint(scripts.get(key))).append("]");
isFirst = false;
}
return prettyPrint.toString();
}
private String prettyPrint(List<SqlScript> scripts) {
StringBuilder prettyPrint = new StringBuilder();
boolean isFirst = true;
for (SqlScript script : scripts) {
if (!isFirst) {
prettyPrint.append(", ");
}
prettyPrint.append(script.getResource().getFilename());
isFirst = false;
}
return prettyPrint.toString();
}
private String extractKeyFromFileName(Resource resource) {
String filename = resource.getFilename();
return filename.substring(0, (!filename.contains(DELIMITER)) ? 0 : filename.indexOf(DELIMITER)).toLowerCase();
}
private DbSupport dbSupport(FlywayConfiguration flywayConfiguration) {
Connection connectionMetaDataTable = JdbcUtils.openConnection(flywayConfiguration.getDataSource());
return DbSupportFactory.createDbSupport(connectionMetaDataTable, true);
}
/**
* @return A new, fully configured, PlaceholderReplacer.
*/
private PlaceholderReplacer createPlaceholderReplacer() {
if (flywayConfiguration.isPlaceholderReplacement()) {
return
new PlaceholderReplacer(flywayConfiguration.getPlaceholders(), flywayConfiguration.getPlaceholderPrefix(),
flywayConfiguration.getPlaceholderSuffix());
}
return PlaceholderReplacer.NO_PLACEHOLDERS;
}
@Override
public void beforeClean(Connection connection) {
execute(BEFORE_CLEAN, connection);
}
@Override
public void afterClean(Connection connection) {
execute(AFTER_CLEAN, connection);
}
@Override
public void beforeMigrate(Connection connection) {
execute(BEFORE_MIGRATE, connection);
}
@Override
public void afterMigrate(Connection connection) {
execute(AFTER_MIGRATE, connection);
}
@Override
public void beforeEachMigrate(Connection connection, MigrationInfo info) {
execute(BEFORE_EACH_MIGRATE, connection);
}
@Override
public void afterEachMigrate(Connection connection, MigrationInfo info) {
execute(AFTER_EACH_MIGRATE, connection);
}
@Override
public void beforeValidate(Connection connection) {
execute(BEFORE_VALIDATE, connection);
}
@Override
public void afterValidate(Connection connection) {
execute(AFTER_VALIDATE, connection);
}
@Override
public void beforeBaseline(Connection connection) {
execute(BEFORE_BASELINE, connection);
}
@Override
public void afterBaseline(Connection connection) {
execute(AFTER_BASELINE, connection);
}
@Override
public void beforeRepair(Connection connection) {
execute(BEFORE_REPAIR, connection);
}
@Override
public void afterRepair(Connection connection) {
execute(AFTER_REPAIR, connection);
}
@Override
public void beforeInfo(Connection connection) {
execute(BEFORE_INFO, connection);
}
@Override
public void afterInfo(Connection connection) {
execute(AFTER_INFO, connection);
}
private void execute(String key, Connection connection) {
List<SqlScript> sqlScripts = scripts.get(key);
LOG.debug(String.format("%s - sqlscript: %s for key: %s", getClass().getSimpleName(), sqlScripts, key));
Collections.sort(sqlScripts, new SqlScriptLexicalComparator());
for (SqlScript script : sqlScripts) {
executeScript(key, connection, script);
}
}
//Not private for testing
void executeScript(String key, Connection connection, SqlScript script) {
LOG.info(String.format("%s - Executing SQL callback: %s : %s", getClass().getSimpleName(), key,
script.getResource().getFilename()));
script.execute(new JdbcTemplate(connection, 0));
}
//Not private for testing
static final class SqlScriptLexicalComparator implements Comparator<SqlScript> {
@Override
public int compare(SqlScript o1, SqlScript o2) {
return Collator.getInstance().compare(o1.getResource().getFilename(), o2.getResource().getFilename());
}
}
}
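To actually use the callback, it still has to be registered with Flyway. A minimal wiring sketch, assuming the programmatic API of that Flyway version (imports and the DataSource are omitted here, as in the snippets above):
Flyway flyway = new Flyway();
flyway.setDataSource(dataSource);
// Register the custom callback so the am__/bm__/... scripts are executed.
flyway.setCallbacks(new MultipleScriptPerCallback());
flyway.migrate();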
Related
Hi, I am new to Storm and Kafka.
I am using Storm 1.0.1 and Kafka 0.10.0.
We have a KafkaSpout that receives a Java bean from a Kafka topic.
I have spent several hours digging to find the right approach for this.
I found a few articles which are useful, but none of the approaches have worked for me so far.
Following is my code:
StormTopology:
public class StormTopology {
public static void main(String[] args) throws Exception {
//Topo test /zkroot test
if (args.length == 4) {
System.out.println("started");
BrokerHosts hosts = new ZkHosts("localhost:2181");
SpoutConfig kafkaConf1 = new SpoutConfig(hosts, args[1], args[2],
args[3]);
kafkaConf1.zkRoot = args[2];
kafkaConf1.useStartOffsetTimeIfOffsetOutOfRange = true;
kafkaConf1.startOffsetTime = kafka.api.OffsetRequest.LatestTime();
kafkaConf1.scheme = new SchemeAsMultiScheme(new KryoScheme());
KafkaSpout kafkaSpout1 = new KafkaSpout(kafkaConf1);
System.out.println("started");
ShuffleBolt shuffleBolt = new ShuffleBolt(args[1]);
AnalysisBolt analysisBolt = new AnalysisBolt(args[1]);
TopologyBuilder topologyBuilder = new TopologyBuilder();
topologyBuilder.setSpout("kafkaspout", kafkaSpout1, 1);
//builder.setBolt("counterbolt2", countbolt2, 3).shuffleGrouping("kafkaspout");
//This is for field grouping in bolt we need two bolt for field grouping or it wont work
topologyBuilder.setBolt("shuffleBolt", shuffleBolt, 3).shuffleGrouping("kafkaspout");
topologyBuilder.setBolt("analysisBolt", analysisBolt, 5).fieldsGrouping("shuffleBolt", new Fields("trip"));
Config config = new Config();
config.registerSerialization(VehicleTrip.class, VehicleTripKyroSerializer.class);
config.setDebug(true);
config.setNumWorkers(1);
LocalCluster cluster = new LocalCluster();
cluster.submitTopology(args[0], config, topologyBuilder.createTopology());
// StormSubmitter.submitTopology(args[0], config,
// builder.createTopology());
} else {
System.out
.println("Insufficent Arguements - topologyName kafkaTopic ZKRoot ID");
}
}
}
I am serializing the data to Kafka using Kryo.
KafkaProducer:
public class StreamKafkaProducer {
private static Producer producer;
private final Properties props = new Properties();
private static final StreamKafkaProducer KAFKA_PRODUCER = new StreamKafkaProducer();
private StreamKafkaProducer(){
props.put("bootstrap.servers", "localhost:9092");
props.put("acks", "all");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "com.abc.serializer.MySerializer");
producer = new org.apache.kafka.clients.producer.KafkaProducer(props);
}
public static StreamKafkaProducer getStreamKafkaProducer(){
return KAFKA_PRODUCER;
}
public void produce(String topic, VehicleTrip vehicleTrip){
ProducerRecord<String,VehicleTrip> producerRecord = new ProducerRecord<>(topic,vehicleTrip);
producer.send(producerRecord);
//producer.close();
}
public static void closeProducer(){
producer.close();
}
}
Kryo Serializer:
public class DataKyroSerializer extends Serializer<Data> implements Serializable {
@Override
public void write(Kryo kryo, Output output, Data data) {
output.writeLong(data.getStartedOn().getTime());
output.writeLong(data.getEndedOn().getTime());
}
@Override
public Data read(Kryo kryo, Input input, Class<Data> aClass) {
Data data = new Data();
data.setStartedOn(new Date(input.readLong()));
data.setEndedOn(new Date(input.readLong()));
return data;
}
}
I need to get the data back into the Data bean.
As per a few articles, I need to provide a custom scheme and make it part of the topology, but so far I have had no luck.
Code for the bolt and scheme:
Scheme:
public class KryoScheme implements Scheme {
private ThreadLocal<Kryo> kryos = new ThreadLocal<Kryo>() {
protected Kryo initialValue() {
Kryo kryo = new Kryo();
kryo.addDefaultSerializer(Data.class, new DataKyroSerializer());
return kryo;
};
};
@Override
public List<Object> deserialize(ByteBuffer ser) {
return Utils.tuple(kryos.get().readObject(new ByteBufferInput(ser.array()), Data.class));
}
@Override
public Fields getOutputFields( ) {
return new Fields( "data" );
}
}
and bolt:
public class AnalysisBolt implements IBasicBolt {
/**
*
*/
private static final long serialVersionUID = 1L;
private String topicname = null;
public AnalysisBolt(String topicname) {
this.topicname = topicname;
}
public void prepare(Map stormConf, TopologyContext topologyContext) {
System.out.println("prepare");
}
public void execute(Tuple input, BasicOutputCollector collector) {
System.out.println("execute");
Fields fields = input.getFields();
try {
JSONObject eventJson = (JSONObject) JSONSerializer.toJSON((String) input
.getValueByField(fields.get(1)));
String StartTime = (String) eventJson.get("startedOn");
String EndTime = (String) eventJson.get("endedOn");
String Oid = (String) eventJson.get("_id");
int V_id = (Integer) eventJson.get("vehicleId");
//call method getEventForVehicleWithinTime(Long vehicleId, Date startTime, Date endTime)
System.out.println("==========="+Oid+"| "+V_id+"| "+StartTime+"| "+EndTime);
} catch (Exception e) {
e.printStackTrace();
}
}
But if I submit the Storm topology, I am getting this error:
java.lang.IllegalStateException: Spout 'kafkaspout' contains a
non-serializable field of type com.abc.topology.KryoScheme$1, which
was instantiated prior to topology creation.
com.minda.iconnect.topology.KryoScheme$1 should be instantiated within
the prepare method of 'kafkaspout at the earliest.
I would appreciate help debugging the issue and guidance toward the right path.
Thanks
Your ThreadLocal is not Serializable. The preferable solution would be to make your serializer both Serializable and thread-safe. If this is not possible, then I see two alternatives, since there is no prepare method as you would get in a bolt:
1. Declare it as static, which is inherently transient.
2. Declare it transient and access it via a private get method, so you can initialize the variable on first access (a sketch of this option follows below).
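For the second alternative, here is a minimal sketch of what that could look like for the scheme in question. The lazy getter is my own addition; the ThreadLocal is kept so each executor thread still gets its own Kryo instance (a benign race on the very first access is possible; synchronize the getter if that matters). Imports are omitted, as in the original snippets.
public class KryoScheme implements Scheme {

    // transient: skipped when the topology is serialized to ZooKeeper, so the
    // non-serializable ThreadLocal never travels with the spout.
    private transient ThreadLocal<Kryo> kryos;

    private ThreadLocal<Kryo> kryos() {
        if (kryos == null) { // first access happens on the worker JVM
            kryos = ThreadLocal.withInitial(() -> {
                Kryo kryo = new Kryo();
                kryo.addDefaultSerializer(Data.class, new DataKyroSerializer());
                return kryo;
            });
        }
        return kryos;
    }

    @Override
    public List<Object> deserialize(ByteBuffer ser) {
        return Utils.tuple(kryos().get().readObject(new ByteBufferInput(ser.array()), Data.class));
    }

    @Override
    public Fields getOutputFields() {
        return new Fields("data");
    }
}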
Within the Storm lifecycle, the topology is instantiated and then serialized to byte format to be stored in ZooKeeper, prior to the topology being executed. Within this step, if a spout or bolt within the topology has an initialized unserializable property, serialization will fail.
If there is a need for a field that is unserializable, initialize it within the bolt or spout's prepare method, which is run after the topology is delivered to the worker.
Source: Best Practices for implementing Apache Storm
I am using Spring JDBC and some nice Java 8 lambda syntax to execute queries with the JdbcTemplate.
The reason for choosing Spring's JdbcTemplate is the implicit resource handling that spring-jdbc offers (I do NOT want an ORM framework for my simple use cases).
My problem is that I want to debug the whole SQL statements with their parameters. Spring prints the SQL by default, but not the parameters. Therefore I have subclassed JdbcTemplate and overridden a query method.
An example usage of the JdbcTemplate:
public List<Product> getProductsByModel(String modelName) {
List<Product> productList = jdbcTemplate.query(
"select * from product p, productmodel m " +
"where p.modelId = m.id " +
"and m.name = ?",
(rs, rowNum) -> new Product(
rs.getInt("id"),
rs.getString("stc_number"),
rs.getString("version"),
getModelById(rs.getInt("modelId")), // method not shown
rs.getString("displayName"),
rs.getString("imageUrl")
),
modelName);
return productList;
}
To get hold of the parameters I have, as mentioned, overridden the JdbcTemplate class. By doing a cast and using reflection, I get the Object[] field with the parameters from an instance of ArgumentPreparedStatementSetter.
I suspect this implementation could potentially be dangerous, as the actual implementation of the PreparedStatementSetter may not always be ArgumentPreparedStatementSetter (yes, I should do an instanceof check). Also, the reflection code may not be the most elegant, but that is beside the point for now :).
Here's my custom implementation:
public class CustomJdbcTemplate extends JdbcTemplate {
private static final Logger log = LoggerFactory.getLogger(CustomJdbcTemplate.class);
public CustomJdbcTemplate(DataSource dataSource) {
super(dataSource);
}
public <T> T query(PreparedStatementCreator psc, final PreparedStatementSetter pss, final ResultSetExtractor<T> rse)
throws DataAccessException {
if(log.isDebugEnabled()) {
ArgumentPreparedStatementSetter aps = (ArgumentPreparedStatementSetter) pss;
try {
Field args = aps.getClass().getDeclaredField("args");
args.setAccessible(true);
Object[] parameters = (Object[]) args.get(aps);
log.debug("Parameters for SQL query: " + Arrays.toString(parameters));
} catch (NoSuchFieldException | IllegalAccessException e) {
throw new GenericException(e.toString(), e);
}
}
return super.query(psc, pss, rse);
}
}
So, when I execute the log.debug(...) statement, I would also like to have the original SQL query logged on the same line. Has anyone done something similar, or are there any better suggestions as to how this can be achieved?
I do quite a few queries using this CustomJdbcTemplate and all my tests run, so I think it is an acceptable solution for most debugging purposes.
Kind regards,
Thomas
I found a way to get the SQL-statement, so I will answer my own question :)
The PreparedStatementCreator has the following implementation:
private static class SimplePreparedStatementCreator implements PreparedStatementCreator, SqlProvider
So the SqlProvider has a getSql() method which does exactly what I need.
Posting the "improved" CustomJdbcTemplate class if anyone ever should need to do the same :)
public class CustomJdbcTemplate extends JdbcTemplate {
private static final Logger log = LoggerFactory.getLogger(CustomJdbcTemplate.class);
public CustomJdbcTemplate(DataSource dataSource) {
super(dataSource);
}
public <T> T query(PreparedStatementCreator psc, final PreparedStatementSetter pss, final ResultSetExtractor<T> rse)
throws DataAccessException {
if(log.isDebugEnabled()) {
if(pss instanceof ArgumentPreparedStatementSetter) {
ArgumentPreparedStatementSetter aps = (ArgumentPreparedStatementSetter) pss;
try {
Field args = aps.getClass().getDeclaredField("args");
args.setAccessible(true);
Object[] parameters = (Object[]) args.get(aps);
log.debug("SQL query: [{}]\tParams: {} ", getSql(psc), Arrays.toString(parameters));
} catch (NoSuchFieldException | IllegalAccessException e) {
throw new GenericException(e.toString(), e);
}
}
}
return super.query(psc, pss, rse);
}
private static String getSql(Object sqlProvider) { // this code is also found in the JDBCTemplate class
if (sqlProvider instanceof SqlProvider) {
return ((SqlProvider) sqlProvider).getSql();
}
else {
return null;
}
}
}
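For completeness, a small usage sketch with made-up names (not from the question): the RowMapper convenience methods on JdbcTemplate delegate internally to query(PreparedStatementCreator, PreparedStatementSetter, ResultSetExtractor) with an ArgumentPreparedStatementSetter, so the logging above also fires for plain calls like this, as long as DEBUG is enabled for CustomJdbcTemplate:
// dataSource is assumed to be configured elsewhere; imports omitted as above.
CustomJdbcTemplate jdbcTemplate = new CustomJdbcTemplate(dataSource);
List<String> names = jdbcTemplate.query(
        "select name from product where modelId = ?",
        (rs, rowNum) -> rs.getString("name"),
        42);
// Expected DEBUG output, roughly:
// SQL query: [select name from product where modelId = ?]    Params: [42]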
The issue is that the 'resourceID' from 'DriveId.getResourceId()' is not available (returns NULL) on newly created files (product of 'DriveFolder.createFile(GAC, meta, cont)'). If the file is retrieved by a regular list or query procedure, the 'resourceID' is correct.
I suspect it is a timing/latency issue, but it is not clear if there is an application action that would force a refresh. 'Drive.DriveApi.requestSync(GAC)' seems to have no effect.
UPDATE (07/22/2015)
Thanks to the prompt response from Steven Bazyl (see comments below), I finally have a satisfactory solution using Completion Events. Here are two minified code snippets that deliver the ResourceId to the app as soon as the newly created file is propagated to the Drive:
File creation, add change subscription:
public class CreateEmptyFileActivity extends BaseDemoActivity {
private static final String TAG = "_X_";
@Override
public void onConnected(Bundle connectionHint) { super.onConnected(connectionHint);
MetadataChangeSet meta = new MetadataChangeSet.Builder()
.setTitle("EmptyFile.txt").setMimeType("text/plain")
.build();
Drive.DriveApi.getRootFolder(getGoogleApiClient())
.createFile(getGoogleApiClient(), meta, null,
new ExecutionOptions.Builder()
.setNotifyOnCompletion(true)
.build()
)
.setResultCallback(new ResultCallback<DriveFileResult>() {
@Override
public void onResult(DriveFileResult result) {
if (result.getStatus().isSuccess()) {
DriveId driveId = result.getDriveFile().getDriveId();
Log.d(TAG, "Created a empty file: " + driveId);
DriveFile file = Drive.DriveApi.getFile(getGoogleApiClient(), driveId);
file.addChangeSubscription(getGoogleApiClient());
}
}
});
}
}
Event Service, catches the completion:
public class ChngeSvc extends DriveEventService {
private static final String TAG = "_X_";
@Override
public void onCompletion(CompletionEvent event) { super.onCompletion(event);
DriveId driveId = event.getDriveId();
Log.d(TAG, "onComplete: " + driveId.getResourceId());
switch (event.getStatus()) {
case CompletionEvent.STATUS_CONFLICT: Log.d(TAG, "STATUS_CONFLICT"); event.dismiss(); break;
case CompletionEvent.STATUS_FAILURE: Log.d(TAG, "STATUS_FAILURE"); event.dismiss(); break;
case CompletionEvent.STATUS_SUCCESS: Log.d(TAG, "STATUS_SUCCESS "); event.dismiss(); break;
}
}
}
Under normal circumstances (wifi), I get the ResourceId almost immediately.
20:40:53.247﹕Created a empty file: DriveId:CAESABiiAiDGsfO61VMoAA==
20:40:54.305: onComplete, ResourceId: 0BxOS7mTBMR_bMHZRUjJ5NU1ZOWs
... done for now.
ORIGINAL POST, deprecated, left here for reference.
I let this answer sit for a year hoping that GDAA will develop a solution that works. The reason for my nagging is simple. If my app creates a file, it needs to broadcast this fact to its buddies (other devices, for instance) with an ID that is meaningful (that is ResourceId). It is a trivial task under the REST Api where ResourceId comes back as soon as the file is successfully created.
Needless to say, I understand the GDAA philosophy of shielding the app from network primitives, caching, batching, ... But clearly, in this situation, the ResourceId is available long before it is delivered to the app.
Originally, I implemented Cheryl Simon's suggestion and added a ChangeListener on a newly created file, hoping to get the ResourceId when the file is propagated. Using the classic CreateEmptyFileActivity from android-demos, I smacked together the following test code:
public class CreateEmptyFileActivity extends BaseDemoActivity {
private static final String TAG = "CreateEmptyFileActivity";
final private ChangeListener mChgeLstnr = new ChangeListener() {
@Override
public void onChange(ChangeEvent event) {
Log.d(TAG, "event: " + event + " resId: " + event.getDriveId().getResourceId());
}
};
@Override
public void onConnected(Bundle connectionHint) { super.onConnected(connectionHint);
MetadataChangeSet meta = new MetadataChangeSet.Builder()
.setTitle("EmptyFile.txt").setMimeType("text/plain")
.build();
Drive.DriveApi.getRootFolder(getGoogleApiClient())
.createFile(getGoogleApiClient(), meta, null)
.setResultCallback(new ResultCallback<DriveFileResult>() {
@Override
public void onResult(DriveFileResult result) {
if (result.getStatus().isSuccess()) {
DriveId driveId = result.getDriveFile().getDriveId();
Log.d(TAG, "Created a empty file: " + driveId);
Drive.DriveApi.getFile(getGoogleApiClient(), driveId).addChangeListener(getGoogleApiClient(), mChgeLstnr);
}
}
});
}
}
... and was waiting for something to happen. The file was happily uploaded to the Drive within seconds, but no onChange() event came. 10 minutes, 20 minutes, ... I could not find any way to make the ChangeListener wake up.
So the only other solution I could come up with was to nudge the GDAA. So I implemented a simple handler-poker that tickles the metadata until something happens:
public class CreateEmptyFileActivity extends BaseDemoActivity {
private static final String TAG = "CreateEmptyFileActivity";
final private ChangeListener mChgeLstnr = new ChangeListener() {
@Override
public void onChange(ChangeEvent event) {
Log.d(TAG, "event: " + event + " resId: " + event.getDriveId().getResourceId());
}
};
static DriveId driveId;
private static final int ENOUGH = 4; // nudge 4x, 1+2+3+4 = 10seconds
private static int mWait = 1000;
private int mCnt;
private Handler mPoker;
private final Runnable mPoke = new Runnable() { public void run() {
if (mPoker != null && driveId != null && driveId.getResourceId() == null && (mCnt++ < ENOUGH)) {
MetadataChangeSet meta = new MetadataChangeSet.Builder().build();
Drive.DriveApi.getFile(getGoogleApiClient(), driveId).updateMetadata(getGoogleApiClient(), meta).setResultCallback(
new ResultCallback<DriveResource.MetadataResult>() {
@Override
public void onResult(DriveResource.MetadataResult result) {
if (result.getStatus().isSuccess() && result.getMetadata().getDriveId().getResourceId() != null)
Log.d(TAG, "resId COOL " + result.getMetadata().getDriveId().getResourceId());
else
mPoker.postDelayed(mPoke, mWait *= 2);
}
}
);
} else {
mPoker = null;
}
}};
@Override
public void onConnected(Bundle connectionHint) { super.onConnected(connectionHint);
MetadataChangeSet meta = new MetadataChangeSet.Builder()
.setTitle("EmptyFile.txt").setMimeType("text/plain")
.build();
Drive.DriveApi.getRootFolder(getGoogleApiClient())
.createFile(getGoogleApiClient(), meta, null)
.setResultCallback(new ResultCallback<DriveFileResult>() {
@Override
public void onResult(DriveFileResult result) {
if (result.getStatus().isSuccess()) {
driveId = result.getDriveFile().getDriveId();
Log.d(TAG, "Created a empty file: " + driveId);
Drive.DriveApi.getFile(getGoogleApiClient(), driveId).addChangeListener(getGoogleApiClient(), mChgeLstnr);
mCnt = 0;
mPoker = new Handler();
mPoker.postDelayed(mPoke, mWait);
}
}
});
}
}
And voila, 4 seconds (give or take) later, the ChangeListener delivers a new shiny ResourceId. Of course, the ChangeListener thus becomes obsolete, since the poker routine gets the ResourceId as well.
So this is the answer for those who can't wait for the ResourceId. Which brings up the follow-up question:
Why do I have to tickle metadata (or re-commit content), very likely creating unnecessary network traffic, to get onChange() event, when I see clearly that the file has been propagated a long time ago, and GDAA has the ResourceId available?
ResourceIds become available when the newly created resource is committed to the server. In the case of a device that is offline, this could be arbitrarily long after the initial file creation. It will happen as soon as possible after the creation request though, so you don't need to do anything to speed it along.
If you really need it right away, you could conceivably use the change notifications to listen for the resourceId to change.
Currently, openGoogle() does get called for each test case with the correct parameters. The problem is that setBrowser does not appear to be working properly. It sets the browser the first time and completes the test successfully. However, when openGoogle() is invoked the second time, it continues to use the first browser instead of the newly specified one.
using NFramework = NUnit.Framework;
...
[NFramework.TestFixture]
public class SampleTest : FluentAutomation.FluentTest
{
string path;
private Action<TinyIoCContainer> currentRegistration;
public TestContext TestContext { get; set; }
[NFramework.SetUp]
public void Init()
{
FluentAutomation.Settings.ScreenshotOnFailedExpect = true;
FluentAutomation.Settings.ScreenshotOnFailedAction = true;
FluentAutomation.Settings.DefaultWaitTimeout = TimeSpan.FromSeconds(1);
FluentAutomation.Settings.DefaultWaitUntilTimeout = TimeSpan.FromSeconds(30);
FluentAutomation.Settings.MinimizeAllWindowsOnTestStart = false;
FluentAutomation.Settings.ScreenshotPath = path = "C:\\ScreenShots";
}
[NFramework.Test]
[NFramework.TestCase(SeleniumWebDriver.Browser.Firefox)]
[NFramework.TestCase(SeleniumWebDriver.Browser.InternetExplorer)]
public void openGoogle(SeleniumWebDriver.Browser browser)
{
setBrowser(browser);
I.Open("http://www.google.com/");
I.WaitUntil(() => I.Expect.Exists("body"));
I.Enter("Unit Testing").In("input[name=q]");
I.TakeScreenshot(browser + "EnterText");
I.Click("button[name=btnG]");
I.WaitUntil(() => I.Expect.Exists(".mw"));
I.TakeScreenshot(browser + "ClickSearch");
}
public SampleTest()
{
currentRegistration = FluentAutomation.Settings.Registration;
}
private void setBrowser(SeleniumWebDriver.Browser browser)
{
switch (browser)
{
case SeleniumWebDriver.Browser.InternetExplorer:
case SeleniumWebDriver.Browser.Firefox:
FluentAutomation.SeleniumWebDriver.Bootstrap(browser);
break;
}
}
}
Note: Doing it this way below DOES work correctly - opening a separate browser for each test.
public class SampleTest : FluentAutomation.FluentTest {
string path;
private Action currentRegistration;
public TestContext TestContext { get; set; }
private void ie()
{
FluentAutomation.SeleniumWebDriver.Bootstrap(FluentAutomation.SeleniumWebDriver.Browser.InternetExplorer);
}
private void ff()
{
FluentAutomation.SeleniumWebDriver.Bootstrap(FluentAutomation.SeleniumWebDriver.Browser.Firefox);
}
public SampleTest()
{
//ff
FluentAutomation.SeleniumWebDriver.Bootstrap();
currentRegistration = FluentAutomation.Settings.Registration;
}
[TestInitialize]
public void Initialize()
{
FluentAutomation.Settings.ScreenshotOnFailedExpect = true;
FluentAutomation.Settings.ScreenshotOnFailedAction = true;
FluentAutomation.Settings.DefaultWaitTimeout = TimeSpan.FromSeconds(1);
FluentAutomation.Settings.DefaultWaitUntilTimeout = TimeSpan.FromSeconds(30);
FluentAutomation.Settings.MinimizeAllWindowsOnTestStart = false;
path = TestContext.TestResultsDirectory;
FluentAutomation.Settings.ScreenshotPath = path;
}
[TestMethod]
public void OpenGoogleIE()
{
ie();
openGoogle("IE");
}
[TestMethod]
public void OpenGoogleFF()
{
ff();
openGoogle("FF");
}
private void openGoogle(string browser)
{
I.Open("http://www.google.com/");
I.WaitUntil(() => I.Expect.Exists("body"));
I.Enter("Unit Testing").In("input[name=q]");
I.TakeScreenshot(browser + "EnterText");
I.Click("button[name=btnG]");
I.WaitUntil(() => I.Expect.Exists(".mw"));
I.TakeScreenshot(browser + "ClickSearch");
} }
Dev branch: The latest bits in the Dev branch play nicely with NUnit's parameterized test cases in my experience.
Just move the Bootstrap call inside the testcase itself and be sure that you manually call I.Dispose() at the end. This allows for proper browser creation when run in this context.
Here is an example that you should be able to copy/paste and run, if you pull latest from GitHub on the dev branch.
[TestCase(FluentAutomation.SeleniumWebDriver.Browser.InternetExplorer)]
[TestCase(FluentAutomation.SeleniumWebDriver.Browser.Chrome)]
public void CartTest(FluentAutomation.SeleniumWebDriver.Browser browser)
{
FluentAutomation.SeleniumWebDriver.Bootstrap(browser);
I.Open("http://automation.apphb.com/forms");
I.Select("Motorcycles").From(".liveExample tr select:eq(0)"); // Select by value/text
I.Select(2).From(".liveExample tr select:eq(1)"); // Select by index
I.Enter(6).In(".liveExample td.quantity input:eq(0)");
I.Expect.Text("$197.70").In(".liveExample tr span:eq(1)");
// add second product
I.Click(".liveExample button:eq(0)");
I.Select(1).From(".liveExample tr select:eq(2)");
I.Select(4).From(".liveExample tr select:eq(3)");
I.Enter(8).In(".liveExample td.quantity input:eq(1)");
I.Expect.Text("$788.64").In(".liveExample tr span:eq(3)");
// validate totals
I.Expect.Text("$986.34").In("p.grandTotal span");
// remove first product
I.Click(".liveExample a:eq(0)");
// validate new total
I.WaitUntil(() => I.Expect.Text("$788.64").In("p.grandTotal span"));
I.Dispose();
}
It should find its way to NuGet in the next release which I'm hoping happens this week.
NuGet v2.0: Currently only one call to Bootstrap is supported per test. In v1 we had built-in support for running the same test against all the browsers supported by a provider but found that users preferred to split it out into multiple tests.
The way I manage it with v2 is to have a 'Base' TestClass that has the TestMethods in it. I then extend that once per browser I want to target, and override the constructor to call the appropriate Bootstrap method.
A bit more verbose but very easy to manage.
I am researching the best way to load external properties files from an EJB 3 app whose EAR file is deployed to WebLogic.
I was thinking about using an init servlet, but I read somewhere that it would be too slow (e.g. my message handler might receive a message from my JMS queue before the init servlet runs).
Suppose I have multiple property files or one file here:
~/opt/conf/
So far, I feel that the best possible solution is to use a WebLogic application lifecycle event, where the code reads the properties files during pre-start:
import weblogic.application.ApplicationLifecycleListener;
import weblogic.application.ApplicationLifecycleEvent;
public class MyListener extends ApplicationLifecycleListener {
public void preStart(ApplicationLifecycleEvent evt) {
// Load properties files
}
}
See: http://download.oracle.com/docs/cd/E13222_01/wls/docs90/programming/lifecycle.html
What would happen if the server is already running? Would post-start be a viable solution?
Can anyone think of any alternative ways that are better?
It really depends on how often you want the properties to be reloaded. One approach I have taken is to have a properties file wrapper (singleton) that has a configurable parameter defining how often the files should be reloaded. I would then always read properties through that wrapper, and it would reload the properties every 15 minutes (similar to Log4J's ConfigureAndWatch). That way, if I wanted to, I could change properties without changing the state of a deployed application.
This also allows you to load properties from a database, instead of a file. That way you can have a level of confidence that properties are consistent across the nodes in a cluster and it reduces complexity associated with managing a config file for each node.
I prefer that over tying it to a lifecycle event. If you weren't ever going to change them, then make them static constants somewhere :)
Here is an example implementation to give you an idea:
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.util.*;
/**
* User: jeffrey.a.west
* Date: Jul 1, 2011
* Time: 8:43:55 AM
*/
public class ReloadingProperties
{
private final String lockObject = "LockMe";
private long lastLoadTime = 0;
private long reloadInterval;
private String filePath;
private Properties properties;
private static final Map<String, ReloadingProperties> instanceMap;
private static final long DEFAULT_RELOAD_INTERVAL = 1000 * 60 * 5;
public static void main(String[] args)
{
ReloadingProperties props = ReloadingProperties.getInstance("myProperties.properties");
System.out.println(props.getProperty("example"));
try
{
Thread.sleep(6000);
}
catch (InterruptedException e)
{
e.printStackTrace();
}
System.out.println(props.getProperty("example"));
}
static
{
instanceMap = new HashMap(31);
}
public static ReloadingProperties getInstance(String filePath)
{
ReloadingProperties instance = instanceMap.get(filePath);
if (instance == null)
{
instance = new ReloadingProperties(filePath, DEFAULT_RELOAD_INTERVAL);
synchronized (instanceMap)
{
instanceMap.put(filePath, instance);
}
}
return instance;
}
private ReloadingProperties(String filePath, long reloadInterval)
{
this.reloadInterval = reloadInterval;
this.filePath = filePath;
}
private void checkRefresh()
{
long currentTime = System.currentTimeMillis();
long sinceLastLoad = currentTime - lastLoadTime;
if (properties == null || sinceLastLoad > reloadInterval)
{
System.out.println("Reloading!");
lastLoadTime = System.currentTimeMillis();
Properties newProperties = new Properties();
FileInputStream fileIn = null;
synchronized (lockObject)
{
try
{
fileIn = new FileInputStream(filePath);
newProperties.load(fileIn);
}
catch (FileNotFoundException e)
{
e.printStackTrace();
}
catch (IOException e)
{
e.printStackTrace();
}
finally
{
if (fileIn != null)
{
try
{
fileIn.close();
}
catch (IOException e)
{
e.printStackTrace();
}
}
}
properties = newProperties;
}
}
}
public String getProperty(String key, String defaultValue)
{
checkRefresh();
return properties.getProperty(key, defaultValue);
}
public String getProperty(String key)
{
checkRefresh();
return properties.getProperty(key);
}
}
Figured it out...
See the corresponding / related post on Stack Overflow.