Auto pretty formatting in Xtext

I want to ask whether there is a way to do pretty formatting in Xtext automatically, without Ctrl+Shift+F or turning it on from the preferences menu. What I actually want is that whenever a user completes writing the code, it is automatically pretty-formatted (at runtime), without Ctrl+Shift+F.

There is a way of doing that called "AutoEdit". It doesn't run exactly when the user completes writing, but on every token; that's at least what I have done, and you can certainly change it. I will give you an example that I implemented for my own project. It capitalizes every keyword as the user types (triggered by spaces and line endings).
It is a UI concern, so in your UI project, in MyDslUiModule.java, you need to bind your custom AutoEdit class like this:
public Class<? extends DefaultAutoEditStrategyProvider> bindDefaultAutoEditStrategyProvider()
{
return MyDslAutoEditStrategyProvider.class;
}
Our class will be called MyDslAutoEditStrategyProvider, so go ahead and create it in a MyDslAutoEditStrategyProvider.java file. Mine contained the following to do what I explained in the introduction:
import java.util.Set;
import org.eclipse.jface.text.DocumentCommand;
import org.eclipse.jface.text.IAutoEditStrategy;
import org.eclipse.jface.text.IDocument;
import org.eclipse.jface.text.IRegion;
import org.eclipse.xtext.GrammarUtil;
import org.eclipse.xtext.IGrammarAccess;
import org.eclipse.xtext.ui.editor.autoedit.DefaultAutoEditStrategyProvider;
import org.eclipse.xtext.ui.editor.model.XtextDocument;
import com.google.inject.Inject;
import com.google.inject.Provider;
public class MyDslAutoEditStrategyProvider extends DefaultAutoEditStrategyProvider {
@Inject
Provider<IGrammarAccess> iGrammar;
private Set<String> KWDS;
@Override
protected void configure(IEditStrategyAcceptor acceptor) {
KWDS = GrammarUtil.getAllKeywords(iGrammar.get().getGrammar());
IAutoEditStrategy strategy = new IAutoEditStrategy()
{
@Override
public void customizeDocumentCommand(IDocument document, DocumentCommand command)
{
// Only react when the typed text starts with a whitespace character (space, tab, newline)
if ( command.text.length() == 0 || command.text.charAt(0) > ' ') return;
IRegion reg = ((XtextDocument) document).getLastDamage();
try {
String token = document.get(reg.getOffset(), reg.getLength());
String possibleKWD = token.toLowerCase();
// Nothing to do if the token is already uppercase or is not a keyword
if ( token.equals(possibleKWD.toUpperCase()) || !KWDS.contains(possibleKWD) ) return;
document.replace(reg.getOffset(), reg.getLength(), possibleKWD.toUpperCase());
}
catch (Exception e)
{
System.out.println("AutoEdit error.\n" + e.getMessage());
}
}
};
acceptor.accept(strategy, IDocument.DEFAULT_CONTENT_TYPE);
super.configure(acceptor);
}
}
I assume that by "user completes writing" you might mean "user saves the file". If so, here is how to trigger the formatter on save:
in MyDslUiModule.java:
public Class<? extends XtextDocumentProvider> bindXtextDocumentProvider()
{
return MyDslDocumentProvider.class;
}
Create the MyDslDocumentProvider class:
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.jface.text.IDocument;
import org.eclipse.ui.PlatformUI;
import org.eclipse.ui.handlers.IHandlerService;
import org.eclipse.xtext.ui.editor.model.XtextDocumentProvider;
public class MyDslDocumentProvider extends XtextDocumentProvider
{
@Override
protected void doSaveDocument(IProgressMonitor monitor, Object element, IDocument document, boolean overwrite)
throws CoreException {
IHandlerService service = (IHandlerService) PlatformUI.getWorkbench().getService(IHandlerService.class);
try {
service.executeCommand("org.eclipse.xtext.ui.FormatAction", null);
} catch (Exception e)
{
e.printStackTrace();
}
super.doSaveDocument(monitor, element, document, overwrite);
}
}

Accessing TableRow columns in BigQuery Apache Beam

I am trying to:
1. Read JSON events from Cloud Pub/Sub.
2. Load the events from Cloud Pub/Sub to BigQuery every 15 minutes using file loads, to save cost on streaming inserts.
3. The destination will differ based on the "user_id" and "campaign_id" fields in the JSON event: "user_id" will be the dataset name and "campaign_id" will be the table name. The partition name comes from the event timestamp.
4. The schema for all tables stays the same.
I am new to Java and Beam. I think my code mostly does what I am trying to do, and I just need a little help here.
However, I am unable to access the "campaign_id" and "user_id" fields in the JSON message,
so my events are not routing to the correct table.
package ...;
import com.google.api.services.bigquery.model.TableSchema;
import javafx.scene.control.TableRow;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.coders.Coder;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.DynamicDestinations;
import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;
import org.apache.beam.sdk.io.gcp.bigquery.TableRowJsonCoder;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.ValueInSingleWindow;
import org.joda.time.Duration;
import org.joda.time.Instant;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.text.SimpleDateFormat;
import static org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED;
import static org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.Method.FILE_LOADS;
import static org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.WriteDisposition.WRITE_APPEND;
public class ClickLogConsumer {
private static final int BATCH_INTERVAL_SECS = 15 * 60;
private static final String PROJECT = "pure-app";
public static PTransform<PCollection<String>, PCollection<com.google.api.services.bigquery.model.TableRow>> jsonToTableRow() {
return new JsonToTableRow();
}
private static class JsonToTableRow
extends PTransform<PCollection<String>, PCollection<com.google.api.services.bigquery.model.TableRow>> {
@Override
public PCollection<com.google.api.services.bigquery.model.TableRow> expand(PCollection<String> stringPCollection) {
return stringPCollection.apply("JsonToTableRow", MapElements.<String, com.google.api.services.bigquery.model.TableRow>via(
new SimpleFunction<String, com.google.api.services.bigquery.model.TableRow>() {
@Override
public com.google.api.services.bigquery.model.TableRow apply(String json) {
try {
InputStream inputStream = new ByteArrayInputStream(
json.getBytes(StandardCharsets.UTF_8.name()));
//OUTER is used here to prevent EOF exception
return TableRowJsonCoder.of().decode(inputStream, Coder.Context.OUTER);
} catch (IOException e) {
throw new RuntimeException("Unable to parse input", e);
}
}
}));
}
}
public static void main(String[] args) throws Exception {
Pipeline pipeline = Pipeline.create(options); // options: PipelineOptions, construction omitted here
pipeline
.apply(PubsubIO.readStrings().withTimestampAttribute("timestamp").fromTopic("projects/pureapp-199410/topics/clicks"))
.apply(jsonToTableRow())
.apply("WriteToBQ",
BigQueryIO.writeTableRows()
.withMethod(FILE_LOADS)
.withWriteDisposition(WRITE_APPEND)
.withCreateDisposition(CREATE_IF_NEEDED)
.withTriggeringFrequency(Duration.standardSeconds(BATCH_INTERVAL_SECS))
.withoutValidation()
.to(new DynamicDestinations<TableRow, String>() {
@Override
public String getDestination(ValueInSingleWindow<TableRow> element) {
String tableName = "campaign_id"; // JSON message in Pub/Sub has "campaign_id" field, how do I access it here?
String datasetName = "user_id"; // JSON message in Pub/Sub has "user_id" field, how do I access it here?
Instant eventTimestamp = element.getTimestamp();
String partition = new SimpleDateFormat("yyyyMMdd").format(eventTimestamp);
return String.format("%s:%s.%s$%s", PROJECT, datasetName, tableName, partition);
}
@Override
public TableDestination getTable(String table) {
return new TableDestination(table, null);
}
@Override
public TableSchema getSchema(String destination) {
return getTableSchema();
}
}));
pipeline.run();
}
}
I arrived at the above code based on reading:
1. https://medium.com/myheritage-engineering/kafka-to-bigquery-load-a-guide-for-streaming-billions-of-daily-events-cbbf31f4b737
2. https://shinesolutions.com/2017/12/05/fun-with-serializable-functions-and-dynamic-destinations-in-cloud-dataflow/
3. https://beam.apache.org/documentation/sdks/javadoc/2.0.0/org/apache/beam/sdk/io/gcp/bigquery/DynamicDestinations.html
4. BigQueryIO - Write performance with streaming and FILE_LOADS
5. Inserting into BigQuery via load jobs (not streaming)
Update
import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import com.google.api.services.bigquery.model.TimePartitioning;
import com.google.common.collect.ImmutableList;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.coders.Coder;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;
import org.apache.beam.sdk.io.gcp.bigquery.TableRowJsonCoder;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.PTransform;
import org.apache.beam.sdk.transforms.SimpleFunction;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import static org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED;
import static org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.Method.FILE_LOADS;
import static org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.WriteDisposition.WRITE_APPEND;
public class ClickLogConsumer {
private static final int BATCH_INTERVAL_SECS = 15 * 60;
private static final String PROJECT = "pure-app";
public static PTransform<PCollection<String>, PCollection<TableRow>> jsonToTableRow() {
return new JsonToTableRow();
}
private static class JsonToTableRow
extends PTransform<PCollection<String>, PCollection<TableRow>> {
@Override
public PCollection<TableRow> expand(PCollection<String> stringPCollection) {
return stringPCollection.apply("JsonToTableRow", MapElements.<String, com.google.api.services.bigquery.model.TableRow>via(
new SimpleFunction<String, TableRow>() {
@Override
public TableRow apply(String json) {
try {
InputStream inputStream = new ByteArrayInputStream(
json.getBytes(StandardCharsets.UTF_8.name()));
//OUTER is used here to prevent EOF exception
return TableRowJsonCoder.of().decode(inputStream, Coder.Context.OUTER);
} catch (IOException e) {
throw new RuntimeException("Unable to parse input", e);
}
}
}));
}
}
public static void main(String[] args) throws Exception {
Pipeline pipeline = Pipeline.create(options); // options: PipelineOptions, construction omitted here
pipeline
.apply(PubsubIO.readStrings().withTimestampAttribute("timestamp").fromTopic("projects/pureapp-199410/topics/clicks"))
.apply(jsonToTableRow())
.apply(BigQueryIO.write()
.withTriggeringFrequency(Duration.standardSeconds(BATCH_INTERVAL_SECS))
.withMethod(FILE_LOADS)
.withWriteDisposition(WRITE_APPEND)
.withCreateDisposition(CREATE_IF_NEEDED)
.withSchema(new TableSchema().setFields(
ImmutableList.of(
new TableFieldSchema().setName("timestamp").setType("TIMESTAMP"),
new TableFieldSchema().setName("exchange").setType("STRING"))))
.to((row) -> {
String datasetName = row.getValue().get("user_id").toString();
String tableName = row.getValue().get("campaign_id").toString();
return new TableDestination(String.format("%s:%s.%s", PROJECT, datasetName, tableName), "Some destination");
})
.withTimePartitioning(new TimePartitioning().setField("timestamp")));
pipeline.run();
}
}
How about: String tableName = element.getValue().get("campaign_id").toString() and likewise for the dataset.
Besides, for inserting into time-partitioned tables, I strongly recommend using BigQuery's column-based partitioning instead of a partition decorator in the table name. Please see "Loading historical data into time-partitioned BigQuery tables" in the javadoc; you'll need a timestamp column. (Note that the javadoc has a typo: "time" vs "timestamp".)
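As a rough illustration of that suggestion, the getDestination callback from the question could look like the fragment below. This is a sketch, assuming every event carries non-null "user_id" and "campaign_id" fields; note also that TableRow here must be com.google.api.services.bigquery.model.TableRow (the javafx.scene.control.TableRow import in the question shadows it and should be removed).
// Fragment of the question's anonymous DynamicDestinations, with the fix applied (sketch)
@Override
public String getDestination(ValueInSingleWindow<TableRow> element) {
    TableRow row = element.getValue();
    String datasetName = String.valueOf(row.get("user_id"));   // dataset from "user_id"
    String tableName = String.valueOf(row.get("campaign_id")); // table from "campaign_id"
    // No $yyyyMMdd decorator: with column-based partitioning, BigQuery routes each
    // row to a partition based on its "timestamp" column instead.
    return String.format("%s:%s.%s", PROJECT, datasetName, tableName);
}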

Arquillian Graphene @Location placeholder

I'm learning Arquillian right now, and I wonder how to create a page that has a placeholder inside the path. For example:
#Location("/posts/{id}")
public class BlogPostPage {
public String getContent() {
// ...
}
}
or
#Location("/posts/{name}")
#Location("/specific-page?requiredParam={value}")
I have looked for an answer in the Graphene and Arquillian reference guides without success. I have used a library from another language with support for page objects, and it has built-in support for placeholders.
AFAIK there is nothing like this implemented in Graphene.
To be honest, I'm not sure how this should behave - how would you pass the values...?
Apart from that, I think it could also be limited by Java's annotation abilities: https://stackoverflow.com/a/10636320/6835063
This is not possible currently in Graphene. I've created ARQGRA-500.
It's possible to extend Graphene to add dynamic parameters now. Here's how. (Arquillian 1.1.10.Final, Graphene 2.1.0.Final.)
Create an interface.
import java.util.Map;
public interface LocationParameterProvider {
Map<String, String> provideLocationParameters();
}
Create a custom LocationDecider to replace the corresponding one from Graphene; I replace the HTTP one. This decider will add location parameters to the URI if it sees that the test object implements our interface.
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.Map;
import java.util.Map.Entry;
import org.jboss.arquillian.core.api.Instance;
import org.jboss.arquillian.core.api.annotation.Inject;
import org.jboss.arquillian.graphene.location.decider.HTTPLocationDecider;
import org.jboss.arquillian.graphene.spi.location.Scheme;
import org.jboss.arquillian.test.spi.context.TestContext;
public class HTTPParameterizedLocationDecider extends HTTPLocationDecider {
@Inject
private Instance<TestContext> testContext;
@Override
public Scheme canDecide() {
return new Scheme.HTTP();
}
@Override
public String decide(String location) {
String uri = super.decide(location);
// not sure how reliable this method of getting the current test object is
// if it breaks, there is always a possibility of observing
// org.jboss.arquillian.test.spi.event.suite.TestLifecycleEvent's (or rather its
// descendants) and storing the test object in a ThreadLocal
Object testObject = testContext.get().getActiveId();
if (testObject instanceof LocationParameterProvider) {
Map<String, String> locationParameters =
((LocationParameterProvider) testObject).provideLocationParameters();
StringBuilder uriParams = new StringBuilder(64);
boolean first = true;
for (Entry<String, String> param : locationParameters.entrySet()) {
uriParams.append(first ? '?' : '&');
first = false;
try {
uriParams.append(URLEncoder.encode(param.getKey(), "UTF-8"));
uriParams.append('=');
uriParams.append(URLEncoder.encode(param.getValue(), "UTF-8"));
} catch (UnsupportedEncodingException e) {
throw new RuntimeException(e);
}
}
uri += uriParams.toString();
}
return uri;
}
}
Our LocationDecider must be registered so that it overrides Graphene's one.
import org.jboss.arquillian.core.spi.LoadableExtension;
import org.jboss.arquillian.graphene.location.decider.HTTPLocationDecider;
import org.jboss.arquillian.graphene.spi.location.LocationDecider;
public class MyArquillianExtension implements LoadableExtension {
@Override
public void register(ExtensionBuilder builder) {
builder.override(LocationDecider.class, HTTPLocationDecider.class,
HTTPParameterizedLocationDecider.class);
}
}
MyArquillianExtension should be registered via SPI, so create the necessary file in your test resources; for me the file path is src/test/resources/META-INF/services/org.jboss.arquillian.core.spi.LoadableExtension. The file must contain the fully qualified class name of MyArquillianExtension.
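For example, assuming MyArquillianExtension lives in a hypothetical com.example.test package, the file would consist of this single line:
com.example.test.MyArquillianExtension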
And that's it. Now you can provide location parameters in a test.
import java.util.HashMap;
import java.util.Map;
import org.jboss.arquillian.graphene.page.InitialPage;
import org.jboss.arquillian.graphene.page.Location;
import org.junit.Test;
public class TestyTest implements LocationParameterProvider {
@Override
public Map<String, String> provideLocationParameters() {
Map<String, String> params = new HashMap<>();
params.put("mykey", "myvalue");
return params;
}
@Test
public void test(@InitialPage TestPage page) {
}
#Location("MyTestView.xhtml")
public static class TestPage {
}
}
I've focused on parameters specifically, but hopefully this paves the way for other dynamic path manipulations.
Of course this doesn't fix the Graphene.goTo API. This means that before using goTo you have to provide parameters via this roundabout provideLocationParameters way, which is awkward. You can make your own alternative API, a goTo that accepts parameters, and modify your LocationDecider to support other parameter providers, as sketched below.
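Here is a rough sketch of that idea. The wrapper class below is hypothetical (not part of Graphene): it stashes the parameters in a ThreadLocal before delegating to Graphene.goTo, and the custom LocationDecider above would then check this ThreadLocal in addition to LocationParameterProvider test objects.
import java.util.Map;
import org.jboss.arquillian.graphene.Graphene;
public final class ParameterizedGoTo {
    // Parameters for the navigation currently in progress on this thread
    public static final ThreadLocal<Map<String, String>> PARAMS = new ThreadLocal<>();
    public static <T> T goTo(Class<T> pageObjectClass, Map<String, String> params) {
        PARAMS.set(params);
        try {
            return Graphene.goTo(pageObjectClass);
        } finally {
            PARAMS.remove(); // don't leak parameters into later navigations
        }
    }
}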

IWorkbenchPage.openEditor() not opening custom editor

I'm building an Eclipse plugin designed around a new perspective, with an editor that stores code/comment snippets when the user highlights them. The parts to it include the perspective, the editor, and a mouse listener.
I have the perspective made and can open it. I have the editor class code constructed; however, on programmatically opening the editor via IWorkbenchPage.openEditor(), my custom editor does not seem to be initialized in any way. Only the default Eclipse editor appears. I can tell because my custom mouse events do not fire.
I used the vogella tutorial as a reference.
Why is my editor's init() method not being called when the editor is opened? I can tell it is not, since the print statements in both init() and createPartControl() are not executed.
In googling this problem, I found a number of hits, but they all revolved around error messages (can't find editor, can't find file, ...). I am getting no error messages, just unexpected behaviour, so those articles did not help.
(I would ideally like a TextViewer instead, since I don't want them editing the contents in this mode anyway, but I decided to start here.)
Code below.
Perspective:
import org.eclipse.ui.IEditorInput;
import org.eclipse.ui.IPageLayout;
import org.eclipse.ui.IPerspectiveFactory;
import org.eclipse.ui.IWorkbenchPage;
import org.eclipse.ui.PartInitException;
import org.eclipse.ui.PlatformUI;
public class PluginPerspective implements IPerspectiveFactory {
@Override
public void createInitialLayout(IPageLayout layout) {
layout.setEditorAreaVisible(true);
layout.setFixed(true);
IWorkbenchPage page = PlatformUI.getWorkbench().getActiveWorkbenchWindow().getActivePage();
IEditorInput iei = page.getActiveEditor().getEditorInput();
try
{
// Open Editor code nabbed from Vogella tutorial.
// He creates an action to do so - I force it to happen when the
// perspective is created.
// I get the name of the current open file as expected.
System.out.println(iei.getName());
page.openEditor(iei, myplugin.PluginEditor.ID, true);
// This message prints, as expected.
System.out.println("open!");
} catch (PartInitException e) {
throw new RuntimeException(e);
}
}
}
Editor: (Removed the other basic editor stubs (isDirty, doSave) since they are not pertinent)
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.jface.text.IDocument;
import org.eclipse.jface.text.TextViewer;
import org.eclipse.swt.events.MouseEvent;
import org.eclipse.swt.events.MouseListener;
import org.eclipse.swt.layout.GridLayout;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.ui.IEditorInput;
import org.eclipse.ui.IEditorPart;
import org.eclipse.ui.IEditorSite;
import org.eclipse.ui.PartInitException;
import org.eclipse.ui.PlatformUI;
import org.eclipse.ui.part.EditorPart;
import org.eclipse.ui.texteditor.ITextEditor;
public class PluginEditor extends EditorPart implements MouseListener {
public static final String ID = "myplugin.plugineditor";
@Override
public void init(IEditorSite site, IEditorInput input)
throws PartInitException {
// TODO Auto-generated method stub
System.out.println("editor init!");
setSite(site);
setInput(input);
}
@Override
public void createPartControl(Composite parent) {
// TODO Auto-generated method stub
System.out.println("editor partcontrol!");
//TextViewer tv = new TextViewer(parent, 0);
//tv.setDocument(getCurrentDocument());
}
@Override
public void mouseDoubleClick(MouseEvent e) {
// TODO Auto-generated method stub
// nothing?
}
@Override
public void mouseDown(MouseEvent e) {
// TODO Auto-generated method stub
// grab start location?
System.out.println("down!");
}
@Override
public void mouseUp(MouseEvent e) {
// TODO Auto-generated method stub
// do stuff!
System.out.println("up!");
}
// to be used later for grabbing the highlighted selection
public IDocument getCurrentDocument() {
final IEditorPart editor = PlatformUI.getWorkbench().getActiveWorkbenchWindow().getActivePage()
.getActiveEditor();
if (!(editor instanceof ITextEditor)) return null;
ITextEditor ite = (ITextEditor)editor;
IDocument doc = ite.getDocumentProvider().getDocument(ite.getEditorInput());
return doc;
//return doc.get();
}
}
Have you registered your editor within your plugin.xml? Note the class attribute below; without it, Eclipse cannot instantiate your PluginEditor, so init() is never called:
<extension
point="org.eclipse.ui.editors">
<editor
class="myplugin.PluginEditor"
default="false"
id="myplugin.plugineditor"
name="name">
</editor>
</extension>
Also, you may want to implement IEditorInput to have specific input for your editor.
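For instance, a minimal custom input could look like the sketch below; the class name and the snippetTitle field are made up for illustration, and getAdapter is declared here with a raw Class parameter as on older Eclipse versions.
import org.eclipse.jface.resource.ImageDescriptor;
import org.eclipse.ui.IEditorInput;
import org.eclipse.ui.IPersistableElement;
public class SnippetEditorInput implements IEditorInput {
    private final String snippetTitle;
    public SnippetEditorInput(String snippetTitle) {
        this.snippetTitle = snippetTitle;
    }
    @Override
    public boolean exists() {
        return true;
    }
    @Override
    public ImageDescriptor getImageDescriptor() {
        return ImageDescriptor.getMissingImageDescriptor();
    }
    @Override
    public String getName() {
        return snippetTitle;
    }
    @Override
    public IPersistableElement getPersistable() {
        return null; // not restored across workbench sessions
    }
    @Override
    public String getToolTipText() {
        return "Snippet: " + snippetTitle;
    }
    @Override
    public Object getAdapter(Class adapter) {
        return null;
    }
}
In practice you would also override equals() and hashCode(), since the workbench compares editor inputs to decide whether an editor with the same input is already open.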

How to get path of current selected file in Eclipse plugin development

I am opening an editor with the Open With menu in Eclipse, but I am not able to get the path of the currently selected file. Sometimes it gives the proper path, but sometimes it throws a NullPointerException.
I am using the following code to get the selected file path.
IWorkbenchPage iwPage=PlatformUI.getWorkbench().getActiveWorkbenchWindow().getActivePage();
System.err.println("iwpage::"+iwPage);
ISelection selection=iwPage.getSelection();
System.err.println("selection::::testtttt"+selection.toString());
if(selection!=null && selection instanceof IStructuredSelection)
{
IStructuredSelection selectedFileSelection = (IStructuredSelection) selection;
System.out.println(selection.toString());
Object obj = selectedFileSelection.getFirstElement();
selectedFile=(IResource)obj;
System.err.println("selection::::"+selectedFile.getLocation().toString());
String html=selectedFile.getLocation().toString().replace(" ","%20");
String html_file="file:///"+html;
return html_file;
}
I found an answer in the Eclipse forum that seems easier and works for me so far.
IWorkbench workbench = PlatformUI.getWorkbench();
IWorkbenchWindow window =
workbench == null ? null : workbench.getActiveWorkbenchWindow();
IWorkbenchPage activePage =
window == null ? null : window.getActivePage();
IEditorPart editor =
activePage == null ? null : activePage.getActiveEditor();
IEditorInput input =
editor == null ? null : editor.getEditorInput();
IPath path = input instanceof FileEditorInput
? ((FileEditorInput)input).getPath()
: null;
if (path != null)
{
// Do something with path.
}
Some of those classes required new project references, so here's a list of all my imports for that class. Not all of them are related to this snippet, of course.
import org.eclipse.core.runtime.FileLocator;
import org.eclipse.core.runtime.IPath;
import org.eclipse.core.runtime.Path;
import org.eclipse.jface.text.ITextViewer;
import org.eclipse.jface.text.source.CompositeRuler;
import org.eclipse.jface.text.source.LineNumberRulerColumn;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.swt.widgets.Control;
import org.eclipse.ui.IEditorInput;
import org.eclipse.ui.IEditorPart;
import org.eclipse.ui.IWorkbench;
import org.eclipse.ui.IWorkbenchPage;
import org.eclipse.ui.IWorkbenchWindow;
import org.eclipse.ui.PlatformUI;
import org.eclipse.ui.part.FileEditorInput;
import org.osgi.framework.Bundle;
You can ask the active editor for the path of the underlying file. Just register an IPartListener on your active IWorkbenchPage and ask within this listener when a part is activated. Here is a snippet:
PlatformUI.getWorkbench().getActiveWorkbenchWindow().getActivePage()
.addPartListener(new IPartListener() {
@Override
public void partOpened(IWorkbenchPart part) {
// TODO Auto-generated method stub
}
@Override
public void partDeactivated(IWorkbenchPart part) {
// TODO Auto-generated method stub
}
@Override
public void partClosed(IWorkbenchPart part) {
// TODO Auto-generated method stub
}
@Override
public void partBroughtToTop(IWorkbenchPart part) {
if (part instanceof IEditorPart) {
if (((IEditorPart) part).getEditorInput() instanceof IFileEditorInput) {
IFile file = ((IFileEditorInput) ((IEditorPart) part)
.getEditorInput()).getFile();
System.out.println(file.getLocation());
}
}
}
@Override
public void partActivated(IWorkbenchPart part) {
// TODO Auto-generated method stub
}
});

How to create a custom aggregator in Mule?

What is the recommended way to create a completely custom aggregator in Mule 3.x? By completely custom, I mean according to my own logic, not using correlation IDs, message counts, etc.
The documentation on the MuleSoft site is outdated, saying to use AbstractEventAggregator, which does not exist in 3.x:
http://www.mulesoft.org/documentation/display/MULE3USER/Message+Splitting+and+Aggregatio
Digging deeper, it looks like this class has been renamed to AbstractAggregator in 3.x:
http://www.mulesoft.org/docs/site/3.2.0/apidocs/org/mule/routing/AbstractAggregator.html
However, there are no examples that show how to use this. The LoanBroker example described in the first link above actually uses a correlation aggregator (in the 2.x examples, which I assume is what the document is referring to).
At one point, there was an abstract class that had abstract methods shouldAggregate and doAggregate. This is the kind of class I would like to extend.
Look at TestAggregator below for an example of subclassing AbstractAggregator.
import org.mule.DefaultMuleEvent;
import org.mule.DefaultMuleMessage;
import org.mule.api.MuleContext;
import org.mule.api.MuleEvent;
import org.mule.api.store.ObjectStoreException;
import org.mule.api.transformer.TransformerException;
import org.mule.routing.AbstractAggregator;
import org.mule.routing.AggregationException;
import org.mule.routing.EventGroup;
import org.mule.routing.correlation.CollectionCorrelatorCallback;
import org.mule.routing.correlation.EventCorrelatorCallback;
import org.mule.util.concurrent.ThreadNameHelper;
import java.util.Iterator;
public class TestAggregator extends AbstractAggregator
{
@Override
protected EventCorrelatorCallback getCorrelatorCallback(MuleContext muleContext)
{
return new CollectionCorrelatorCallback(muleContext, false, storePrefix)
{
// Concatenates the payloads of all events in the group into a single message
@Override
public MuleEvent aggregateEvents(EventGroup events) throws AggregationException
{
StringBuffer buffer = new StringBuffer(128);
try
{
for (Iterator<MuleEvent> iterator = events.iterator(); iterator.hasNext();)
{
MuleEvent event = iterator.next();
try
{
buffer.append(event.transformMessageToString());
}
catch (TransformerException e)
{
throw new AggregationException(events, null, e);
}
}
}
catch (ObjectStoreException e)
{
throw new AggregationException(events, null, e);
}
logger.debug("event payload is: " + buffer.toString());
return new DefaultMuleEvent(new DefaultMuleMessage(buffer.toString(), muleContext), events.getMessageCollectionEvent());
}
};
}
}