To cross the language boundary, the class to be serialized needs to implement the DataSerializable interface on the Java side; and to let the deserializer on the C# side know which class it is, we need to register a classID. Following the example, I wrote my class in Java like this:
public class Stuff implements DataSerializable {

    static { // note that classID (7) must match C#
        Instantiator.register(new Instantiator(Stuff.class, (byte) 0x07) {
            @Override
            public DataSerializable newInstance() {
                return new Stuff();
            }
        });
    }

    private Stuff() {}

    public boolean equals(Object obj) {...}

    public int hashCode() {...}

    public void toData(DataOutput dataOutput) throws IOException {...}

    public void fromData(DataInput dataInput) throws IOException, ClassNotFoundException {...}
}
It looks OK, but when I run it I get this exception:
[warning 2012/03/30 15:06:00.239 JST tid=0x1] Error registering instantiator on pool:
com.gemstone.gemfire.cache.client.ServerOperationException: : While performing a remote registerInstantiators
    at com.gemstone.gemfire.cache.client.internal.AbstractOp.processAck(AbstractOp.java:247)
    at com.gemstone.gemfire.cache.client.internal.RegisterInstantiatorsOp$RegisterInstantiatorsOpImpl.processResponse(RegisterInstantiatorsOp.java:76)
    at com.gemstone.gemfire.cache.client.internal.AbstractOp.attemptReadResponse(AbstractOp.java:163)
    at com.gemstone.gemfire.cache.client.internal.AbstractOp.attempt(AbstractOp.java:363)
    at com.gemstone.gemfire.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:229)
    at com.gemstone.gemfire.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:321)
    at com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:646)
    at com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:108)
    at com.gemstone.gemfire.cache.client.internal.PoolImpl.execute(PoolImpl.java:624)
    at com.gemstone.gemfire.cache.client.internal.RegisterInstantiatorsOp.execute(RegisterInstantiatorsOp.java:39)
    at com.gemstone.gemfire.internal.cache.PoolManagerImpl.allPoolsRegisterInstantiator(PoolManagerImpl.java:216)
    at com.gemstone.gemfire.internal.InternalInstantiator.sendRegistrationMessageToServers(InternalInstantiator.java:188)
    at com.gemstone.gemfire.internal.InternalInstantiator._register(InternalInstantiator.java:143)
    at com.gemstone.gemfire.internal.InternalInstantiator.register(InternalInstantiator.java:71)
    at com.gemstone.gemfire.Instantiator.register(Instantiator.java:168)
    at Stuff.<clinit>(Stuff.java)
Caused by: java.lang.ClassNotFoundException: Stuff$1
I could not figure out why. Can anyone with experience of this help? Thanks in advance!
In most configurations GemFire servers need to deserialize objects in order to index them, run queries, and call listeners. So when you register an instantiator, the class is registered on all machines in the Distributed System. Hence, the class itself must be available for loading everywhere in the cluster.
As the exception stack trace shows, the error happens on a remote node.
Check that the class Stuff is available on all machines participating in the cluster, or at least on the cache servers. Note that the ClassNotFoundException names Stuff$1, the anonymous Instantiator subclass generated by the compiler, so that class must be deployed as well.
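For illustration, here is a minimal sketch (assuming the same GemFire Instantiator API used in the question; the nested class name is my own) that replaces the anonymous subclass with a named nested class, so the extra class that has to ship to the servers has a predictable name:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import com.gemstone.gemfire.DataSerializable;
import com.gemstone.gemfire.Instantiator;

public class Stuff implements DataSerializable {

    // Compiles to Stuff$StuffInstantiator rather than Stuff$1, making it
    // obvious which classes the server-side JAR must contain.
    public static class StuffInstantiator extends Instantiator {
        public StuffInstantiator() {
            super(Stuff.class, (byte) 0x07); // classID must still match C#
        }

        @Override
        public DataSerializable newInstance() {
            return new Stuff();
        }
    }

    static {
        Instantiator.register(new StuffInstantiator());
    }

    private Stuff() {}

    public void toData(DataOutput out) throws IOException { /* ... */ }

    public void fromData(DataInput in) throws IOException, ClassNotFoundException { /* ... */ }
}

Either way, the JAR containing Stuff and its Instantiator subclass must be on the server classpath before the client registers it.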
(Trying to keep this simple.)
I have a (partial) ByteBuddy recipe like this:
builder
.method(someMatcher())
.intercept(MethodDelegation.to(this.interceptor));
I have an "interceptor" class defined like this:
private static final class Interceptor {

    private Interceptor() {
        super();
    }

    @RuntimeType
    private final Object doSomething(@This final Proxy<?> proxy,
                                     @SuperCall final Callable<?> callable,
                                     @Origin final String methodSignature) throws Exception {
        final Object proxiedInstance = proxy.getProxiedInstance();
        // TODO: logic
        return callable.call(); // for now
    }
}
(The interceptor method needs to be non-static for various reasons not important here.)
When I create an instance of this ByteBuddy-defined class and call a simple public void blork() method on it, I get:
Cannot resolve ambiguous delegation of public void com.foo.TestExplorations$Frob.blork() to net.bytebuddy.implementation.bind.MethodDelegationBinder$MethodBinding$Builder$Build@3d101b05 or net.bytebuddy.implementation.bind.MethodDelegationBinder$MethodBinding$Builder$Build@1a9efd25
How can there be ambiguity when there is only one interceptor? What have I done wrong?
Byte Buddy just adds a method call to the instrumented class, and the instrumented class needs to be able to see the delegation target. If the target method is private, it is ignored, and Byte Buddy searches further up the type hierarchy until it finally considers the methods of Object, which are all equally unsuited; an ambiguity exception is therefore thrown instead of an exception saying that no method could be bound.
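So the fix is to widen the visibility. A minimal sketch, assuming the same recipe as the question (I've swapped the asker's Proxy<?> type for plain Object to keep the snippet self-contained):

import java.util.concurrent.Callable;

import net.bytebuddy.implementation.bind.annotation.Origin;
import net.bytebuddy.implementation.bind.annotation.RuntimeType;
import net.bytebuddy.implementation.bind.annotation.SuperCall;
import net.bytebuddy.implementation.bind.annotation.This;

public class Interceptor {

    @RuntimeType
    public Object doSomething(@This final Object proxy,
                              @SuperCall final Callable<?> callable,
                              @Origin final String methodSignature) throws Exception {
        // Now visible to the generated class, so delegation binds here
        // instead of falling through to the methods of Object.
        return callable.call();
    }
}

MethodDelegation.to(this.interceptor) can then see and bind the method; keeping it non-static is fine as long as it is visible.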
Can somebody please give me a hint on how to redefine static methods using byte-buddy 1.6.9?
I have tried this:
public class Source {
    public static String hello(String name) { return null; }
}

public class Target {
    public static String hello(String name) {
        return "Hello " + name + "!";
    }
}
String helloWorld = new ByteBuddy()
    .redefine(Source.class)
    .method(named("hello"))
    .intercept(MethodDelegation.to(Target.class))
    .make()
    .load(getClass().getClassLoader())
    .getLoaded()
    .newInstance()
    .hello("World");
I got the following exception:
Exception in thread "main" java.lang.IllegalStateException: Cannot inject already loaded type: class delegation.Source
Thanks
Classes can only be loaded once by each class loader. In order to replace a method, you would need to use a Java agent to hook into the JVM's HotSwap feature.
Byte Buddy provides a class loading strategy that uses such an agent; use:
.load(Source.class.getClassLoader(),
ClassReloadingStrategy.fromInstalledAgent());
This does, however, require you to install a Java agent. On a JDK, you can do so programmatically via ByteBuddyAgent.install() (included in the byte-buddy-agent artifact). On a plain JRE, you have to specify the agent on the command line.
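Putting the pieces together, a minimal sketch, assuming the Source and Target classes from the question and a JDK runtime (this mirrors the rename-based redefinition shown in the Byte Buddy documentation rather than the MethodDelegation attempt above):

import net.bytebuddy.ByteBuddy;
import net.bytebuddy.agent.ByteBuddyAgent;
import net.bytebuddy.dynamic.loading.ClassReloadingStrategy;

public class RedefineDemo {
    public static void main(String[] args) {
        // Attach an agent to the running JVM (works programmatically on a JDK).
        ByteBuddyAgent.install();

        // Swap Source's bytecode for Target's via HotSwap. The method shapes
        // match, which HotSwap requires: no members may be added or removed.
        new ByteBuddy()
            .redefine(Target.class)
            .name(Source.class.getName())
            .make()
            .load(Source.class.getClassLoader(),
                  ClassReloadingStrategy.fromInstalledAgent());

        System.out.println(Source.hello("World")); // prints: Hello World!
    }
}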
On my service layer I have injected a UnitOfWork and 2 repositories in the constructor. The Unit of Work and the repositories each hold an instance of a DbContext that I want to share between them. How can I do that with Ninject? Which scope should be considered?
I am not in a web application, so I can't use InRequestScope.
I am trying to do something similar... I am using DI; however, I need my UoW to be created and disposed like this:
using (IUnitOfWork uow = UnitOfWorkFactory.Create())
{
    _testARepository.Insert(a);
    _testBRepository.Insert(b);

    uow.SaveChanges();
}
EDIT: I just want to be sure I understand… after looking at https://github.com/ninject/ninject.extensions.namedscope/wiki/InNamedScope I thought about my current console application architecture, which actually uses Ninject.
Let's say:
Class A is a service layer class
Class B is a unit of work which takes an interface (IContextFactory) as a constructor parameter
Class C is a repository which takes the same interface (IContextFactory) as a constructor parameter
The idea here is to be able to do context operations on 2 or more repositories and use the unit of work to apply the changes.
Class D is a context factory (Entity Framework) which provides an instance (kept in a container) of the context that is shared between classes B and C (and would be for other repositories as well).
The context factory keeps the instance in its container, but I don't want to reuse this instance indefinitely, since the context needs to be disposed at the end of the service operation. Is that the main purpose of InNamedScope, actually?
The solution would be the following, but I am not at all sure I am doing it right; the service instances are going to be transient, which means they are actually never disposed?
Bind<IScsContextFactory>()
    .To<ScsContextFactory>()
    .InNamedScope("ServiceScope")
    .WithConstructorArgument(
        "connectionString",
        ConfigurationUtility.GetConnectionString());

Bind<IUnitOfWork>().To<ScsUnitOfWork>();
Bind<IAccountRepository>().To<AccountRepository>();
Bind<IBlockedIpRepository>().To<BlockedIpRepository>();
Bind<IAccountService>().To<AccountService>().DefinesNamedScope("ServiceScope");
Bind<IBlockedIpService>().To<BlockedIpService>().DefinesNamedScope("ServiceScope");
UPDATE: This approach works against NuGet current, but relies on an anomaly in the InCallScope implementation which has been fixed in the current Unstable NuGet packages. I'll be tweaking this answer in a few days to reflect the best approach after some mulling over. NB the high-level way of structuring stuff will stay pretty much identical; just the exact details of the Bind<DbContext>() scoping will change. (Hint: CreateNamedScope in Unstable would work, or one could set up the command handler as DefinesNamedScope. The reason I don't just do that is that I want something that composes/plays well with InRequestScope.)
I highly recommend reading the Ninject.Extensions.NamedScope integration tests (seriously, find them and read and re-read them)
The DbContext is a Unit Of Work so no further wrapping is necessary.
As you want to be able to have multiple 'requests' in flight and want to have a single Unit of Work shared between them, you need to:
Bind<DbContext>()
    .ToMethod(ctx =>
        new DbContext(
            connectionStringName: ConfigurationUtility.GetConnectionString()))
    .InCallScope();
The InCallScope() means that:
for a given object graph composed for a single kernel.Get() call (hence In Call Scope), everyone that requires a DbContext will get the same instance.
the IDisposable.Dispose() will be called when a Kernel.Release() happens for the root object (or a Kernel.Components.Get<ICache>().Clear() happens for the root if it is not .InCallScope())
There should be no reason to use InNamedScope() and DefinesNamedScope(); you don't have long-lived objects you're trying to exclude from the default pooling / parenting / grouping.
If you do the above, you should be able to:
var command = kernel.Get<ICommand>();
try {
    command.Execute();
} finally {
    kernel.Components.Get<ICache>().Clear( command ); // Dispose of DbContext happens here
}
The Command implementation looks like:
class Command : ICommand {
    readonly IAccountRepository _ar;
    readonly IBlockedIpRepository _br;
    readonly DbContext _ctx;

    public Command(IAccountRepository ar, IBlockedIpRepository br, DbContext ctx) {
        _ar = ar;
        _br = br;
        _ctx = ctx;
    }

    void ICommand.Execute() {
        _ar.Insert(a);
        _br.Insert(b);
        _ctx.SaveChanges();
    }
}
Note that in general, I avoid having an implicit Unit of Work in this way, and instead surface its creation and disposal. This makes a Command look like this:
class Command : ICommand {
    readonly IAccountService _as;
    readonly IBlockedIpService _bs;
    readonly Func<DbContext> _createContext;

    public Command(IAccountService @as, IBlockedIpService bs, Func<DbContext> createContext) {
        _as = @as;
        _bs = bs;
        _createContext = createContext;
    }

    void ICommand.Execute() {
        using (var ctx = _createContext()) {
            _as.InsertA(ctx);
            _bs.InsertB(ctx);
            ctx.SaveChanges();
        }
    }
}
This involves no usage of .InCallScope() on the Bind<DbContext>() (but does require the presence of Ninject.Extensions.Factory's FactoryModule to synthesize the Func<DbContext> from a straightforward Bind<DbContext>()).
As discussed in the other answer, InCallScope is not a good approach to solving this problem.
For now I'm dumping some code that works against the latest NuGet Unstable / Include PreRelease / Install-Package -Pre editions of Ninject.Web.Common without a clear explanation. I have started to write a walkthrough of this technique in the Ninject.Extensions.NamedScope wiki's CreateNamedScope/GetScope article.
Possibly some bits will become Pull Request(s) at some stage too (hat tip to @Remo Gloor, who supplied me the outline code). The associated tests and learning tests are in this gist for now, pending packaging in a proper released format TBD.
The exec summary is: you Load the Module below into your Kernel, use .InRequestScope() on everything you want created/disposed per handler invocation, and then feed requests through via IHandlerComposer.ComposeCallDispose.
If you use the following Module:
public class Module : NinjectModule
{
    public override void Load()
    {
        Bind<IHandlerComposer>().To<NinjectRequestScopedHandlerComposer>();

        // Wire it up so InRequestScope will work for Handler scopes
        Bind<INinjectRequestHandlerScopeFactory>().To<NinjectRequestHandlerScopeFactory>();
        NinjectRequestHandlerScopeFactory.NinjectHttpApplicationPlugin.RegisterIn( Kernel );
    }
}
This wires in a Factory[1] and a NinjectHttpApplicationPlugin that exposes:
public interface INinjectRequestHandlerScopeFactory
{
    NamedScope CreateRequestHandlerScope();
}
Then you can use this Composer to Run a Request InRequestScope():
public interface IHandlerComposer
{
    void ComposeCallDispose( Type type, Action<object> callback );
}
Implemented as:
class NinjectRequestScopedHandlerComposer : IHandlerComposer
{
    readonly INinjectRequestHandlerScopeFactory _requestHandlerScopeFactory;

    public NinjectRequestScopedHandlerComposer( INinjectRequestHandlerScopeFactory requestHandlerScopeFactory )
    {
        _requestHandlerScopeFactory = requestHandlerScopeFactory;
    }

    void IHandlerComposer.ComposeCallDispose( Type handlerType, Action<object> callback )
    {
        using ( var resolutionRoot = _requestHandlerScopeFactory.CreateRequestHandlerScope() )
            foreach ( object handler in resolutionRoot.GetAll( handlerType ) )
                callback( handler );
    }
}
The Ninject Infrastructure stuff:
class NinjectRequestHandlerScopeFactory : INinjectRequestHandlerScopeFactory
{
    internal const string ScopeName = "Handler";

    readonly IKernel _kernel;

    public NinjectRequestHandlerScopeFactory( IKernel kernel )
    {
        _kernel = kernel;
    }

    NamedScope INinjectRequestHandlerScopeFactory.CreateRequestHandlerScope()
    {
        return _kernel.CreateNamedScope( ScopeName );
    }

    /// <summary>
    /// When plugged in as a Ninject Kernel Component via <c>RegisterIn(IKernel)</c>, makes the Named Scope generated during IHandlerFactory.RunAndDispose available for use via the Ninject.Web.Common's <c>.InRequestScope()</c> Binding extension.
    /// </summary>
    public class NinjectHttpApplicationPlugin : NinjectComponent, INinjectHttpApplicationPlugin
    {
        readonly IKernel kernel;

        public static void RegisterIn( IKernel kernel )
        {
            kernel.Components.Add<INinjectHttpApplicationPlugin, NinjectHttpApplicationPlugin>();
        }

        public NinjectHttpApplicationPlugin( IKernel kernel )
        {
            this.kernel = kernel;
        }

        object INinjectHttpApplicationPlugin.GetRequestScope( IContext context )
        {
            // TODO PR for TryGetScope
            try
            {
                return NamedScopeExtensionMethods.GetScope( context, ScopeName );
            }
            catch ( UnknownScopeException )
            {
                return null;
            }
        }

        void INinjectHttpApplicationPlugin.Start()
        {
        }

        void INinjectHttpApplicationPlugin.Stop()
        {
        }
    }
}
OK, so I have asked another question on the same topic here, and while I did not get a direct answer there, I've pulled together some code that works to do what I wanted. The question is: does this approach break some OOP principle?
What I wanted
Use proper OOP to declare fault types on a service
Have one catch block on the client side that can handle multiple types of exceptions thrown from the service
Have one HandleException method per fault class, each with its own implementation
On the client side, have just one catch block that understands which exception was thrown and calls the respective HandleException method on the corresponding fault class
How I got it working
Declared a fault contract on the server for each exception type, each inheriting from a base fault type:
[DataContract]
public class BusinessRuleViolationFault : BaseFault
{
    public BusinessRuleViolationFault(string message)
        : base(message)
    {
    }
}

[DataContract]
public class SomeOtherViolationFault : BaseFault
{
    public SomeOtherViolationFault(string message)
        : base(message)
    {
    }
}

[DataContract]
public abstract class BaseFault
{
    public BaseFault(string message)
    {
        Message = message;
    }

    // Property declaration added so the snippet compiles; it must be a
    // [DataMember] to travel across the wire.
    [DataMember]
    public string Message { get; set; }
}
On the client side I created partial classes of the same fault types as above and implemented the HandleException method in them. I had to do this on the client side because, if I created this method on the service side, it would not get serialized and would not be available via the proxy.
public partial class BusinessRuleViolationFault : BaseFault
{
    public override void HandleException()
    {
        MessageBox.Show("BusinessRuleViolationFault handled");
    }
}

public partial class SomeOtherViolationFault : BaseFault
{
    public override void HandleException()
    {
        MessageBox.Show("SomeOtherViolationFault handled");
    }
}

public abstract partial class BaseFault
{
    public abstract void HandleException();
}
Then I created an extension method on the FaultException class, as per Christian's code, which I have marked as the accepted answer on my previous post. This basically used reflection to get the name of the fault exception class that was thrown.
Then, in my client catch block, I used that name to create an instance of the locally created partial class, which has the HandleException method.
What I am curious to know is, have I broken some OOP principle here?
Is this OOP at all?
I don't want multiple if/else statements in this one catch block, nor multiple catch blocks. What is your opinion on the tradeoff of using one catch block to gain performance, then losing it to reflection while figuring out which class's method to call?
Thanks for your time and patience ...
I don't understand exactly why reflection is needed here (as described in the previously posted question). I simply do this in my code and it works fine:
try
{
    proxy.CallServiceMethod(message);
}
catch (Exception e)
{
    if (e is FaultException<BaseFault>)
    {
        BaseFault exceptionToHandle =
            (e as FaultException<BaseFault>).Detail as BaseFault;
        exceptionToHandle.HandleException();
    }
}
Aside from the unnecessary reflection, I don't see anything wrong with the way you have implemented this (from an OOP point of view at least).
I'm working on a project using Java RMI.
This is the class causing problem:
public class FSFile implements Serializable
{
    public static final int READ = 0;
    public static final int WRITE = 1;

    private int flag;
    private String filename;
    private transient BufferedWriter writer;
    private transient BufferedReader reader;

    ...

    private void writeObject(ObjectOutputStream stream) throws IOException
    {
        stream.defaultWriteObject();
        stream.writeObject(writer);
        stream.writeObject(reader);
    }

    private void readObject(ObjectInputStream stream) throws IOException, ClassNotFoundException
    {
        stream.defaultReadObject();
        writer = (BufferedWriter) stream.readObject();
        reader = (BufferedReader) stream.readObject();
    }
}
Basically, I use RMI to send that FSFile object to another process locally (for now) and here's the error I get:
java.rmi.UnmarshalException: error unmarshalling return; nested exception is:
    java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException:
    java.io.BufferedReader
To be more precise, there's a class named FileService which uses a function fetch() on a FileServer to get an FSFile in return. There is nothing special in the fetch() function; it just creates an FSFile and returns it. All communication between those classes happens via RMI.
Why do I get an error like this?
You can't serialize readers and writers. It makes no sense. It's like trying to send a telephone over a telephone line. If you want to send a file, send the file.
And your code just calls writeObject on these objects as though they were Serializable. They aren't. Otherwise you could have made them non-transient and omitted the custom readObject and writeObject methods altogether. Just re-coding what the system would have done anyway doesn't change anything. It certainly doesn't make classes Serializable that aren't.
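A minimal sketch of that idea, assuming the point of FSFile is to move the file's data (the contents field and openReader() helper are my own, not the asker's API): serialize a byte[] payload and recreate the streams locally after unmarshalling:

import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.InputStreamReader;
import java.io.Serializable;
import java.nio.charset.StandardCharsets;

public class FSFile implements Serializable {
    public static final int READ = 0;
    public static final int WRITE = 1;

    private int flag;
    private String filename;
    private byte[] contents; // the data travels; the streams never do

    // Recreate a reader on demand after the object arrives, rather than
    // trying to serialize a BufferedReader.
    public BufferedReader openReader() {
        return new BufferedReader(new InputStreamReader(
                new ByteArrayInputStream(contents), StandardCharsets.UTF_8));
    }
}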
If you don't want to re-implement file/stream sending via RMI, you could look into RMIIO; it handles such things in a concise and effective way.