I've been reading the Hive source code recently, but I'm confused by this interrupt(). I want to know how it interrupts the current Hive command. The call is located in CliDriver.processLine().
In the implementation of HiveInterruptUtils (http://people.apache.org/~hashutosh/hive-clover/common/org/apache/hadoop/hive/common/HiveInterruptUtils.html), you find this:
public static void interrupt() {
synchronized (interruptCallbacks) {
for (HiveInterruptCallback resource :
new ArrayList<HiveInterruptCallback>(interruptCallbacks)) {
resource.interrupt();
}
}
}
That interrupts all the resources previously added to the interruptCallbacks list.
HiveInterruptCallback itself (http://people.apache.org/~hashutosh/hive-clover/common/org/apache/hadoop/hive/common/HiveInterruptCallback.html#HiveInterruptCallback) is an interface:
public interface HiveInterruptCallback {
/**
* Request interrupting of the processing
*/
void interrupt();
}
The previously registered resources implement the HiveInterruptCallback interrupt() method, so the behavior of HiveInterruptUtils.interrupt() depends on the specific resource implementations.
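To make the registration side concrete, here is a rough sketch of how a command can register a callback (assuming the add()/remove() methods visible in the linked HiveInterruptUtils source; the body of interrupt() is only illustrative):
// Hypothetical sketch: register a callback before running a command,
// remove it again when the command finishes.
HiveInterruptCallback callback = new HiveInterruptCallback() {
    @Override
    public void interrupt() {
        // release whatever the running command holds:
        // kill running jobs, close streams, clean up scratch dirs, etc.
    }
};
HiveInterruptUtils.add(callback);
try {
    // run the Hive command
} finally {
    HiveInterruptUtils.remove(callback);
}
When the command is interrupted (e.g. Ctrl+C in the CLI), HiveInterruptUtils.interrupt() walks the list and calls interrupt() on every registered callback, as shown above.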
I'm making a basic IntelliJ plugin that lets a user define a Run Configuration (following the tutorial at [1]) and use said Run Configurations to execute the file open in the editor on a remote server.
My Run Configuration is simple (3 text fields), and I have it all working; however, after editing the Run Configuration and clicking "Apply" or "OK", the entered values are lost.
What is the correct way to persist and read back values (both when the Run Configuration is re-opened and when the Run Configuration's Runner is invoked)? It looks like I could try to create custom persistence using [2]; however, it seems like the plugin framework should already have a way to handle this, or at least hooks for when Apply/OK is pressed.
[1] https://www.jetbrains.org/intellij/sdk/docs/tutorials/run_configurations.html
[2] https://www.jetbrains.org/intellij/sdk/docs/basics/persisting_state_of_components.html
Hopefully, this post is a bit more clear to those new to IntelliJ plugin development and illustrates how persisting/loading Run Configurations can be achieved. Please read through the code comments as this is where much of the explanation takes place.
Also note that SettingsEditorImpl is my custom implementation of the SettingsEditor abstract class, and likewise, RunConfigurationImpl is my custom implementation of the RunConfiguration abstract class.
The first thing to do is to expose the form fields via custom getters on your SettingsEditorImpl (i.e. getHost()):
public class SettingsEditorImpl extends SettingsEditor<RunConfigurationImpl> {
private JPanel configurationPanel; // This is the outer-most JPanel
private JTextField hostJTextField;
public SettingsEditorImpl() {
super();
}
@NotNull
@Override
protected JComponent createEditor() {
return configurationPanel;
}
/* Gets the Form fields value */
private String getHost() {
return hostJTextField.getText();
}
/* Copy value FROM your custom runConfiguration back INTO the Form UI; This is to load previously saved values into the Form when it's opened. */
@Override
protected void resetEditorFrom(RunConfigurationImpl runConfiguration) {
hostJTextField.setText(StringUtils.defaultIfBlank(runConfiguration.getHost(), RUN_CONFIGURATION_HOST_DEFAULT));
}
/* Sync the value from the Form UI INTO the RunConfiguration, which is what the rest of your code will interact with. This requires a way to set this value on your custom RunConfiguration, i.e. RunConfigurationImpl#setHost(host) */
@Override
protected void applyEditorTo(RunConfigurationImpl runConfiguration) throws ConfigurationException {
runConfiguration.setHost(getHost());
}
}
So now, the custom SettingsEditor, which backs the Form UI, is set up to Sync field values In and Out of itself. Remember, the custom RunConfiguration is what is going to actually represent this configuration; the SettingsEditor implementation just represents the FORM (a subtle difference, but important).
Now we need a custom RunConfiguration ...
/* Annotate the class with @State and @Storage, which is used to define how this RunConfiguration's data will be persisted/loaded. */
@State(
name = Constants.PLUGIN_NAME,
storages = {@Storage(Constants.PLUGIN_NAME + "__run-configuration.xml")}
)
public class RunConfigurationImpl extends RunConfigurationBase {
// It's good to 'namespace' keys to your component
public static final String KEY_HOST = Constants.PLUGIN_NAME + ".host";
private String host;
public RunConfigurationImpl(Project project, ConfigurationFactory factory, String name) {
super(project, factory, name);
}
/* Return an instance of the custom SettingsEditor ... see class defined above */
@NotNull
@Override
public SettingsEditor<? extends RunConfiguration> getConfigurationEditor() {
return new SettingsEditorImpl();
}
/* Return null, else we'll get a Startup/Connection tab in our Run Configuration UI in IntelliJ */
@Nullable
@Override
public SettingsEditor<ConfigurationPerRunnerSettings> getRunnerSettingsEditor(ProgramRunner runner) {
return null;
}
/* This is a pretty cool method. Every time SettingsEditor#applyEditorTo() changes the values in this class, this method is run and can check/validate any fields! If a RuntimeConfigurationException is thrown, the exception's message is shown at the bottom of the Run Configuration UI in IntelliJ! */
@Override
public void checkConfiguration() throws RuntimeConfigurationException {
if (!StringUtils.startsWithAny(getHost(), "http://", "https://")) {
throw new RuntimeConfigurationException("Invalid host");
}
}
@Nullable
@Override
public RunProfileState getState(@NotNull Executor executor, @NotNull ExecutionEnvironment executionEnvironment) throws ExecutionException {
return null;
}
/* This READS any prior persisted configuration from the State/Storage defined by this class's annotations ... see above.
You must manually read and populate the fields using JDOMExternalizerUtil.readField(..).
This method is invoked at the "right time" by the plugin framework. You don't need to call it yourself.
*/
@Override
public void readExternal(Element element) throws InvalidDataException {
super.readExternal(element);
host = JDOMExternalizerUtil.readField(element, KEY_HOST);
}
/* This WRITES/persists the configuration TO the State/Storage defined by this class's annotations ... see above.
You must manually write the fields using JDOMExternalizerUtil.writeField(..).
This method is invoked at the "right time" by the plugin framework. You don't need to call it yourself.
*/
@Override
public void writeExternal(Element element) throws WriteExternalException {
super.writeExternal(element);
JDOMExternalizerUtil.writeField(element, KEY_HOST, host);
}
/* This method is what's used by the rest of the plugin code to access the configured 'host' value. The host field (variable) is written:
1. when readExternal(..) loads a value from a persisted config.
2. when SettingsEditor#applyEditorTo(..) is called after the Form itself changes.
*/
public String getHost() {
return host;
}
/* This method sets the value, and is primarily used by the custom SettingsEditor's applyEditorTo(..) method call */
public void setHost(String host) {
this.host = host;
}
}
To read these configuration values elsewhere, say for example in a custom ProgramRunner, you would do something like:
final RunConfigurationImpl runConfiguration = (RunConfigurationImpl) executionEnvironment.getRunnerAndConfigurationSettings().getConfiguration();
runConfiguration.getHost(); // Returns the configured host value
See com.intellij.execution.configurations.RunConfigurationBase#readExternal as well as com.intellij.execution.configurations.RunConfigurationBase#loadState and com.intellij.execution.configurations.RunConfigurationBase#writeExternal
For a new project I'm building a REST API that references resources from a second service. For the sake of client convenience I want this association to be serialized as an _embedded entry.
Is this possible at all? I thought about building a fake CrudRepository (a facade for a Feign client) and manually changing all URLs for that fake resource with resource processors. Would that work?
A little deep dive into the functionality of Spring Data REST:
Spring Data REST wraps all entities into PersistentEntityResource objects that extend the Resource<T> type that Spring HATEOAS provides. This particular implementation has a list of embedded objects that will be serialized as the _embedded field.
So in theory the solution to my problem should be as simple as implementing a ResourceProcessor<Resource<MyType>> and adding my reference object to the embeds.
In practice this approach has some ugly but solvable issues:
PersistentEntityResource is not generic, so while you can build a ResourceProcessor for it, that processor will by default catch everything. I am not sure what happens when you start using Projections. So that is not a solution.
PersistentEntityResource implements Resource<Object> and as a result cannot be cast to Resource<MyType> and vice versa. If you want to access the embedded field, all casts have to be done with PersistentEntityResource.class.cast() and Resource.class.cast().
Overall my solution is simple, effective, and not very pretty. I hope Spring HATEOAS gets full-fledged HAL support in the future.
Here is my ResourceProcessor as a sample:
@Bean
public ResourceProcessor<Resource<MyType>> typeProcessorToAddReference() {
// DO NOT REPLACE WITH LAMBDA!!!
return new ResourceProcessor<Resource<MyType>>() {
@Override
public Resource<MyType> process(Resource<MyType> resource) {
try {
// XXX all resources here are PersistentEntityResource instances, but they can't be cast normally
PersistentEntityResource halResource = PersistentEntityResource.class.cast(resource);
List<EmbeddedWrapper> embedded = Lists.newArrayList(halResource.getEmbeddeds());
ReferenceObject reference = spineClient.findReferenceById(resource.getContent().getReferenceId());
embedded.add(embeddedWrappers.wrap(reference, "reference-relation"));
// XXX all resources here are PersistentEntityResource instances, but they can't be cast normally
resource = Resource.class.cast(PersistentEntityResource.build(halResource.getContent(), halResource.getPersistentEntity())
.withEmbedded(embedded).withLinks(halResource.getLinks()).build());
} catch (Exception e) {
log.error("Something went wrong", e);
// swallow
}
return resource;
}
};
}
If you would like to work in a type-safe manner and with links only (adding references to custom controller methods), you can find inspiration in this sample code:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.hateoas.EntityModel;
import org.springframework.hateoas.server.RepresentationModelProcessor;
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;
@Configuration
public class MyTypeLinkConfiguration {
public static class MyType {}
@Bean
public RepresentationModelProcessor<EntityModel<MyType>> MyTypeProcessorAddLifecycleLinks(MyTypeLifecycleStates myTypeLifecycleStates) {
// WARNING, no lambda can be passed here, because type is crucial for applying this bean processor.
return new RepresentationModelProcessor<EntityModel<MyType>>() {
@Override
public EntityModel<MyType> process(EntityModel<MyType> resource) {
// add custom export link for single MyType
myTypeLifecycleStates
.listReachableStates(resource.getContent().getState())
.forEach(reachableState -> {
try {
// for each possible next state, generate its relation which will get us to given state
switch (reachableState) {
case DRAFT:
resource.add(linkTo(methodOn(MyTypeLifecycleController.class).requestRework(resource.getContent().getId(), null)).withRel("requestRework"));
break;
case IN_REVIEW:
resource.add(linkTo(methodOn(MyTypeLifecycleController.class).requestReview(resource.getContent().getId(), null)).withRel("requestReview"));
break;
default:
throw new RuntimeException("Link for target state " + reachableState + " is not implemented!");
}
} catch (Exception ex) {
// swallowed
log.error("error while adding lifecycle link for target state " + reachableState + "! ex=" + ex.getMessage(), ex);
}
});
return resource;
}
};
}
}
Note that myTypeLifecycleStates is an autowired "service"/"business logic" bean.
I recently added an Android native module to my app which listens for timezone and time changed broadcasts from the system and allows the app to perform some operations. The native module looks like this:
public class TimezoneHandlerModule extends ReactContextBaseJavaModule {
private final Context context;
private final TimezoneChangeBroadcastReceiver timezoneChangeBroadcastReceiver;
private Callback onTimezoneChangeCallback;
public TimezoneHandlerModule(ReactApplicationContext reactContext) {
super(reactContext);
this.context = reactContext;
this.timezoneChangeBroadcastReceiver = new TimezoneChangeBroadcastReceiver();
}
private void registerForTimezoneChangeHandler() {
IntentFilter intentFilter = new IntentFilter();
intentFilter.addAction(Intent.ACTION_TIME_CHANGED);
intentFilter.addAction(Intent.ACTION_TIMEZONE_CHANGED);
getReactApplicationContext().registerReceiver(timezoneChangeBroadcastReceiver, intentFilter);
}
private void unregisterTimezoneChangeHandler() {
getReactApplicationContext().unregisterReceiver(timezoneChangeBroadcastReceiver);
}
public void setOnTimezoneChangeCallback(Callback onTimezoneChangeCallback) {
this.onTimezoneChangeCallback = onTimezoneChangeCallback;
}
/**
* @return the name of this module. This will be the name used to {@code require()} this module
* from javascript.
*/
@Override
public String getName() {
return "TimezoneHandler";
}
@ReactMethod
public void start(Callback onChange) {
Log.d(getName(), "Starting the timezone change handler");
this.registerForTimezoneChangeHandler();
this.setOnTimezoneChangeCallback(onChange);
}
@ReactMethod
public void stop() {
Log.d(getName(), "Stopping the timezone change handler");
this.unregisterTimezoneChangeHandler();
}
private class TimezoneChangeBroadcastReceiver extends BroadcastReceiver {
@Override
public void onReceive(Context context, Intent intent) {
Log.d(getName(), "Received broadcast for timezone/time change " + intent.getAction());
final String action = intent.getAction();
if (action.equals(Intent.ACTION_TIME_CHANGED) || action.equals(Intent.ACTION_TIMEZONE_CHANGED)) {
TimezoneHandlerModule.this.onTimezoneChangeCallback.invoke();
}
}
}
}
Two React methods are exposed: start and stop. start takes a function as a parameter, which is invoked whenever a broadcast for a timezone or time change is received. After hooking up the native module and starting the app in the emulator, I opened Settings and changed the timezone, and I can see that the relevant logs are printed.
11-24 17:07:21.837 1597-1597/com.xyz D/TimezoneHandler: Received broadcast for timezone/time change
11-24 17:07:21.837 1597-1907/com.xyz I/ReactNativeJS: Detected timezone change
When I change the timezone again, I see below error in the logcat output
11-24 17:22:42.356 1597-1597/com.galarmapp D/TimezoneHandler: Received broadcast for timezone/time change
11-24 17:22:42.365 1597-1907/com.galarmapp E/ReactNativeJS: The callback start() exists in module TimezoneHandler, but only one callback may be registered to a function in a native module.
11-24 17:22:42.367 1597-1908/com.galarmapp E/unknown:React: The callback start() exists in module TimezoneHandler, but only one callback may be registered to a function in a native module., stack:
__invokeCallback#12814:10
<unknown>#12685:24
guard#12604:3
invokeCallbackAndReturnFlushedQueue#12684:6
From the error message, it seems as if I am trying to attach a separate callback to the start function but I am not doing any such thing. I am calling the start method in the componentWillMount of the top level component and have confirmed that it is not called twice. I see that other people have also seen this error while trying different things but still don't understand the reason behind the problem.
Please share if you have any insights.
According to the documentation http://facebook.github.io/react-native/docs/native-modules-android.html#callbacks - "A native module is supposed to invoke its callback only once. It can, however, store the callback and invoke it later." Once you have done invoke() on the callback, you cannot use it again.
This particular use case of a time zone change is better solved by sending events to JavaScript. See this documentation http://facebook.github.io/react-native/docs/native-modules-android.html#sending-events-to-javascript
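For example, a minimal sketch of the event-based version (the event name "timezoneChanged" and the payload key are illustrative, not part of the original module):
// Inside TimezoneHandlerModule: instead of storing a Callback, emit an event
// each time the broadcast receiver fires. Unlike a callback, an event can be
// sent any number of times.
// imports: com.facebook.react.bridge.Arguments, com.facebook.react.bridge.WritableMap,
//          com.facebook.react.modules.core.DeviceEventManagerModule, java.util.TimeZone
private void sendTimezoneChangedEvent() {
    WritableMap params = Arguments.createMap();
    params.putString("timezone", TimeZone.getDefault().getID());
    getReactApplicationContext()
        .getJSModule(DeviceEventManagerModule.RCTDeviceEventEmitter.class)
        .emit("timezoneChanged", params);
}
On the JavaScript side, subscribe with DeviceEventEmitter.addListener('timezoneChanged', handler) and remove the listener when the component unmounts.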
I've done a few small projects in Camel now, but one thing I'm struggling to understand is how to deal with big data (that doesn't fit into memory) when consuming in Camel routes.
I have a database containing a couple of GBs worth of data that I would like to route using Camel. Obviously reading all the data into memory isn't an option.
If I were doing this as a standalone app I would have code that paged through the data and sent chunks to my JMS endpoint. I'd like to use Camel as it provides a nice pattern. If I were consuming from a file I could use the streaming() call, roughly like the sketch below.
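Something like this, for illustration only (the endpoint URIs are placeholders; the split streams line by line so the whole file never sits in memory):
// Inside a RouteBuilder.configure(): stream a large file line by line to JMS
from("file:data/inbox?noop=true")
    .split(body().tokenize("\n")).streaming()
    .to("jms:queue:chunks");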
Also, should I use camel-sql, camel-jdbc, or camel-jpa, or use a bean to read from my database?
Hope everyone is still with me. I'm more familiar with the Java DSL but would appreciate any help/suggestions people can provide.
Update: 2-MAY-2012
So I've had some time to play around with this and I think what I'm actually doing is abusing the concept of a Producer so that I can use it in a route.
public class MyCustomRouteBuilder extends RouteBuilder {
  public void configure() {
    from("timer:foo?period=60s").to("mycustomcomponent:TEST");
    from("direct:msg").process(new Processor() {
      public void process(Exchange ex) throws Exception {
        System.out.println("Receiving value: " + ex.getIn().getBody());
      }
    });
  }
}
My producer looks something like the following. For clarity I've not included the CustomEndpoint or CustomComponent as it just seems to be a thin wrapper.
public class MyCustomProducer extends DefaultProducer {
  Endpoint e;
  CamelContext c;

  public MyCustomProducer(Endpoint endpoint) {
    super(endpoint);
    this.e = endpoint;
    this.c = e.getCamelContext();
  }

  public void process(Exchange ex) throws Exception {
    Endpoint directEndpoint = c.getEndpoint("direct:msg");
    ProducerTemplate t = new DefaultProducerTemplate(c);
    // Simulate streaming operation / chunking of BIG data.
    for (int i = 0; i < 20; i++) {
      t.start();
      String s = "Value " + i;
      t.sendBody(directEndpoint, s);
      t.stop();
    }
  }
}
Firstly, the above doesn't seem very clean. It seems like the cleanest way to do this would be to populate a JMS queue (in place of direct:msg) via a scheduled Quartz job that my Camel route then consumes, so that I have more flexibility over the message size received within my Camel pipelines. However, I quite liked the semantics of setting up time-based activations as part of the route.
Does anyone have any thoughts on the best way to do this?
In my understanding, all you need to do is:
from("jpa:SomeEntity" +
"?consumer.query=select e from SomeEntity e where e.processed = false" +
"&maximumResults=150" +
"&consumeDelete=false")
.to("jms:queue:entities");
maximumResults defines a limit of how many entities you get per query.
When you finish the processing of an entity instance, you need to set e.processed = true; and persist() it, so that the entity won't be processed again.
One way to do that is with the @Consumed annotation:
class SomeEntity {
@Consumed
public void markAsProcessed() {
setProcessed(true);
}
}
Another thing you need to be careful with is how you serialize the entity before sending it to the queue. You might need to use an enricher between the from and the to.
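For example, one option (just a sketch, assuming plain JSON messages on the queue are acceptable and camel-jackson is on the classpath) is to marshal the entity right before the to, so no JPA-attached object ends up being serialized:
from("jpa:SomeEntity"
        + "?consumer.query=select e from SomeEntity e where e.processed = false"
        + "&maximumResults=150"
        + "&consumeDelete=false")
    // convert the entity to a JSON string before it hits the queue
    .marshal().json(JsonLibrary.Jackson) // org.apache.camel.model.dataformat.JsonLibrary
    .to("jms:queue:entities");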
I want to have several buses in one process. I googled this and found that it is only possible by using several AppDomains, but I cannot make it work.
Here is my code sample (I do everything in one class library):
using System;
using System.Diagnostics;
using System.Reflection;
using MyMessages;
using NServiceBus;
using NServiceBus.Config;
using NServiceBus.Config.ConfigurationSource;
namespace Subscriber1
{
public class Sender
{
public static void Main()
{
var domain = AppDomain.CreateDomain("someDomain", AppDomain.CurrentDomain.Evidence);
domain.Load(Assembly.GetExecutingAssembly().GetName());
domain.CreateInstance(Assembly.GetExecutingAssembly().FullName, typeof (PluginBusCreator).FullName);
//here I have some code to send messages to "PluginQueue".
}
}
public class PluginBusCreator
{
public PluginBusCreator()
{
var Bus = Configure.With(
Assembly.Load("NServiceBus"), Assembly.Load("NServiceBus.Core"),
Assembly.LoadFrom("NServiceBus.Host.exe"), Assembly.GetCallingAssembly())
.CustomConfigurationSource(new PluginConfigurationSource())
.SpringFrameworkBuilder()
.XmlSerializer().MsmqTransport()
.UnicastBus().LoadMessageHandlers<First<SomeHandler>>().CreateBus().Start();
}
protected IBus Bus { get; set; }
}
class PluginConfigurationSource : IConfigurationSource
{
public T GetConfiguration<T>() where T : class
{
if (typeof (T) == typeof (MsmqTransportConfig))
return new MsmqTransportConfig
{
ErrorQueue = "error",
InputQueue = "PluginQueue",
MaxRetries = 1,
NumberOfWorkerThreads = 1
} as T;
return null;
}
}
public class SomeHandler : IHandleMessages<EventMessage1>
{
public void Handle(EventMessage1 message)
{
Debugger.Break();
}
}
}
And the handler doesn't get invoked.
If you have any ideas, please help. I've been fighting this problem for a long time.
Also, if the full code needs to be posted, please tell me.
I need several buses to solve the following problem:
I have my target application, and several plugins with it. We decided to build our plugins according to the service bus pattern.
Each plugin can have several profiles.
So, the target application (it is a web app) publishes a message that something has changed in it. Each plugin which is subscribed to this message needs to do some action for each profile. But a plugin knows nothing about its profiles (customers write the plugins). A plugin should only have a profile injected into it when message handling starts.
We decided to have a RecipientList (the pattern is described in "Enterprise Integration Patterns") which knows about the plugin profiles, iterates through them, and re-sends the messages with the profiles injected. (So if a plugin has several profiles, several messages will be sent to it.)
But I don't want each plugin invoked in a new process. Ideally I want to dynamically configure the buses for each plugin during startup, all in one process. But it seems I need to do it in separate AppDomains. So I have the problem described above :-)
Sergey,
I'm unclear as to why each plugin needs to have its own bus. Could they not all sit on the same bus? Each plugin developer would write their message handlers as before, and the subscriptions would be handled automatically by the bus.
Then you also wouldn't need to specify loading each of the NServiceBus DLLs.
BTW, loading an assembly by name tends to cause problems - try using this to specify assemblies:
typeof(IMessage).Assembly, typeof(MsmqTransportConfig).Assembly, typeof(IConfigureThisEndpoint).Assembly