Restlet: Why does chaining ClientResource.getChild() yield siblings rather than nested resources? - restlet

The following sample attempts to use two different methods to perform path composition for nested resources. The first uses ClientResource.getChild(), while the second creates a new Reference from the original ClientResource's reference, and then adds two segments before constructing a new ClientResource for that path.
public static void main( String[] args ) {
    // I would expect that this:
    ClientResource res1 = new ClientResource( "http://localhost" );
    ClientResource res2 = res1.getChild( "foo" );
    ClientResource res3 = res2.getChild( "bar" );
    System.out.printf( "res3 points to %s\n", res3.getReference().getTargetRef().toString() );
    // Would be the same as this:
    Reference ref = new Reference( res1.getReference() );
    ref.addSegment( "foo" ).addSegment( "bar" );
    ClientResource res4 = new ClientResource( ref );
    System.out.printf( "res4 points to %s\n", res4.getReference().getTargetRef().toString() );
}
Here is the output:
res3 points to http://localhost/bar
res4 points to http://localhost/foo/bar
Interestingly, if I include a / when creating res2 (res1.getChild( "foo/" );), I get the result I was expecting. However, that doesn't help me if I'm first deriving resource foo, and later want to derive its child resource bar.
So is this a bug, or is there a purpose to this behavior?
Note: I'm using Restlet 2.3.0
Edit: This has proven very distressing to me, because the presence of getChild was one of the reasons I selected Restlet over Jersey. Between this apparent bug and the resource path conventions implied by Restlet, I believe I will be using Jersey for future projects.
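For reference, the trailing-slash sensitivity described above matches plain RFC 3986 relative-reference resolution, in which a relative segment replaces the base URI's last path segment unless the base ends in a slash. Python's `urllib.parse.urljoin` (used here purely as a neutral illustration of the RFC behavior, not of Restlet itself) reproduces the observed outputs:

```python
from urllib.parse import urljoin

# Without a trailing slash, "bar" replaces the last segment (a sibling);
# with the trailing slash it is appended (a child).
print(urljoin("http://localhost/foo", "bar"))   # http://localhost/bar
print(urljoin("http://localhost/foo/", "bar"))  # http://localhost/foo/bar
```

So whether or not it is what one expects from `getChild()`, resolving `"bar"` against a reference that ends in `foo` (no slash) yielding a sibling is the standard URI-resolution outcome.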


ANTLR4 store all tokens in specific channel

I have a lexer which puts every token the parser is interested in into the default channel and all comment tokens into channel 1.
The default channel is used to create the actual tree, while the comment channel is used to separate out the tokens and to store all comments.
In chapter 12.1, pp. 206-208 of The Definitive ANTLR 4 Reference there is a comparable situation where comment tokens are shifted inside the token stream. The approach presented there is to read out the comment channel in an exit method inside the parser.
In my opinion this is a very rough option for my problem, because I don't want to overwhelm my listener with such backward-looking operations. Is there a possibility to override a method that puts tokens into the comment channel?
It looks like you misunderstand how channels work in ANTLR. What happens is that the lexer, as it comes along a token, assigns the default channel (just a number) during initialization of the token. That value is only changed when the lexer finds a -> channel() command or you change it explicitly in a lexer action. So there is nothing to do in a listener or anywhere else to filter out such tokens.
Later, when you want to get all tokens "in" a given channel (i.e. all tokens that have a specific channel number assigned), you can just iterate over all tokens returned by your token stream and compare the channel value. Alternatively, you can create a new CommonTokenStream instance and pass it the channel you are interested in. It will then only give you the tokens from that channel (it uses a token source, e.g. a lexer, to get the actual tokens and cache them).
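The iterate-and-compare step described above is just a filter on each token's channel number; schematically (in Python, with made-up `(text, channel)` pairs standing in for real ANTLR tokens):

```python
# Hypothetical token stream: (text, channel) pairs. Channel 1 is the
# comment channel from the question; channel 0 is the default channel.
tokens = [("int", 0), ("/* doc */", 1), ("x", 0), ("// note", 1)]

# Collecting all comments is a single pass comparing channel values.
comments = [text for text, channel in tokens if channel == 1]
print(comments)  # ['/* doc */', '// note']
```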
I found that there is an easy way to override how tokens are created: override a method in CommonTokenFactory and give that factory to the lexer. At that point I can check the channel and push the tokens into a separate collection.
In my opinion this is a little bit hacky, but I do not need to iterate over the whole CommonTokenStream later on.
The following code (in C#) only demonstrates the idea:
internal class HeadAnalyzer
{
    #region Methods

    internal void AnalyzeHeader(Stream headerSourceStream)
    {
        var antlrFileStream = new AntlrInputStream(headerSourceStream);
        var mcrLexer = new MCRLexer(antlrFileStream);
        var commentSaverTokenFactory = new MyTokenFactory();
        mcrLexer.TokenFactory = commentSaverTokenFactory;
        var commonTokenStream = new CommonTokenStream(mcrLexer);
        var mcrParser = new MCRParser(commonTokenStream);
        mcrParser.AddErrorListener(new DefaultErrorListener());
        MCRParser.ProgramContext tree;
        try
        {
            tree = mcrParser.program(); // create the tree
        }
        catch (SyntaxErrorException syntaxErrorException)
        {
            throw new NotImplementedException();
        }
        var headerContext = new HeaderContext();
        var headListener = new HeadListener(headerContext);
        ParseTreeWalker.Default.Walk(headListener, tree);
        var comments = commentSaverTokenFactory.CommentTokens; // contains all comments :)
    }

    #endregion
}

internal class MyTokenFactory : CommonTokenFactory
{
    internal readonly List<CommonToken> CommentTokens = new List<CommonToken>();

    public override CommonToken Create(Tuple<ITokenSource, ICharStream> source, int type, string text, int channel, int start, int stop, int line, int charPositionInLine)
    {
        var token = base.Create(source, type, text, channel, start, stop, line, charPositionInLine);
        if (token.Channel == 1)
        {
            CommentTokens.Add(token);
        }
        return token;
    }
}
Maybe there are some better approaches, but for my use case it works as expected.

BIRT PDF render: fonts register time

We have used BIRT since version 2 (currently 4.2.2) and have always been plagued by the PDF (iText?) font registration time.
org.eclipse.birt.report.engine.layout.pdf.font.FontMappingManagerFactory$2 run
INFO: register fonts in c:/windows/fonts cost:17803ms
This process only occurs the first time the render is used. Subsequent renders are not problematic.
The problem seems to be the time wasted accessing ALL connected system drives.
Editing fontsConfig.xml in the org.eclipse.birt.report.engine.fonts plugin to reduce the search paths does not resolve the issue; ALL connected drives are still accessed by BIRT.
<font-paths>
    <path path="/windows/fonts" />
</font-paths>
Is there a simple solution for this without having to render a "dummy" report in the background just to initialize BIRT?
This isn't necessarily a solution, but it's a workaround we came up with after noticing the same issue and doing some investigating. It's also (IMO) better than generating a dummy report.
On startup (or at some other point depending on your needs), make the following call:
FontMappingManagerFactory.getInstance().getFontMappingManager(format, Locale.getDefault());
Why?
BIRT uses com.lowagie.text.FontFactory (iText) to register the fonts. Calls to that class are made from
org.eclipse.birt.report.engine.layout.pdf.font.FontMappingManagerFactory which also spits out the log entries given in the question.
Within FontMappingManagerFactory we can see where the log entries are coming from:
private static void registerFontPath( final String fontPath )
{
    AccessController.doPrivileged( new PrivilegedAction<Object>( ) {
        public Object run( )
        {
            long start = System.currentTimeMillis( );
            File file = new File( fontPath );
            if ( file.exists( ) )
            {
                if ( file.isDirectory( ) )
                {
                    FontFactory.registerDirectory( fontPath );
                }
                else
                {
                    FontFactory.register( fontPath );
                }
            }
            long end = System.currentTimeMillis( );
            logger.info( "register fonts in " + fontPath + " cost:"
                    + ( end - start ) + "ms" ); // <-- Here!
            return null;
        }
    } );
}
Working backwards, we see that registerFontPath(String) is called by loadFontMappingConfig(URL), and so on, resulting in the following call hierarchy:
getFontMappingManager(String, Locale)
`-- createFontMappingManager(String, Locale)
`-- loadFontMappingConfig(String)
`-- loadFontMappingConfig(URL)
`-- registerFontPath(String)
And getFontMappingManager(String, Locale) is the public method we can call. More importantly, however, is that the method also caches the FontMappingManager that gets created:
public synchronized FontMappingManager getFontMappingManager(
        String format, Locale locale )
{
    HashMap managers = (HashMap) cachedManagers.get( format );
    if ( managers == null )
    {
        managers = new HashMap( );
        cachedManagers.put( format, managers );
    }
    FontMappingManager manager = (FontMappingManager) managers.get( locale );
    if ( manager == null )
    {
        manager = createFontMappingManager( format, locale );
        managers.put( locale, manager );
    }
    return manager;
}
As a result, when you're ready to go generate your PDF, it will already be in the cache and BIRT won't have to go call down to the FontFactory and re-register the fonts.
But what about the format String?
This bit is some speculation, but I think the valid options are the OUTPUT_FORMAT_XXX Strings in IRenderOption. For our purposes I debugged to see that we want the String to be pdf. Considering that's also conveniently the desired output format, I assume IRenderOption.OUTPUT_FORMAT_PDF is the route to go.
If you're ultimately creating both PDFs and HTML files, it appears that you could make the call twice (once with IRenderOption.OUTPUT_FORMAT_PDF and once with IRenderOption.OUTPUT_FORMAT_HTML) and only the font config files that differ will be considered (i.e. you won't be reading from c:/windows/fonts twice).
All that said, take this with a grain of salt. I believe this is completely safe, since the purpose of getFontMappingManager(String, Locale) is to get an object for accessing available fonts, etc., and it conveniently caches the result. However, if that were to change in the future you may end up with a tricky-to-find bug on your hands.
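The caching pattern above (a per-format map of per-locale managers) is what makes the warm-up call pay off; this Python sketch (illustrative names only, not BIRT's actual API) shows the expensive creation running exactly once:

```python
# Nested-dict memo mirroring cachedManagers in FontMappingManagerFactory.
# create_font_mapping_manager stands in for the slow font-registration path.
calls = []

def create_font_mapping_manager(fmt, locale):
    calls.append((fmt, locale))  # the expensive work would happen here
    return object()

cached_managers = {}

def get_font_mapping_manager(fmt, locale):
    managers = cached_managers.setdefault(fmt, {})
    if locale not in managers:
        managers[locale] = create_font_mapping_manager(fmt, locale)
    return managers[locale]

# Warm-up at startup; the later "real" call is then a cache hit.
m1 = get_font_mapping_manager("pdf", "en")
m2 = get_font_mapping_manager("pdf", "en")
print(m1 is m2, len(calls))  # True 1
```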
I would suggest modifying fontsConfig.xml to remove the fonts you no longer need. Also remove the drives that you don't want BIRT to check for fonts.

How to extract class IL code from loaded assembly and save to disk?

How would I go about extracting the IL code for classes that are generated at runtime by reflection so I can save it to disk? If at all possible. I don't have control of the piece of code that generates these classes.
Eventually, I would like to load this IL code from disk into another assembly.
I know I could serialise/deserialise classes but I wish to use purely IL code. I'm not fussed with the security implications.
Running Mono 2.10.1
Or better yet, use Mono.Cecil.
It will allow you to get at the individual instructions, even manipulating them and disassembling them (with the mono decompiler addition).
Note that the decompiler is a work in progress (last time I checked it did not fully support lambda expressions and Visual Basic exception blocks), but you can get pretty decompiled C# output fairly easily as long as you don't hit those boundary conditions. Also, work has progressed since.
Mono.Cecil in general lets you write the IL to a new assembly as well, which you can then subsequently load into your appdomain if you like to play with the bleeding edge.
Update: I came around to trying this. Unfortunately I think I found the problem you run into. It turns out there seems to be no way to get at the IL bytes for a generated type unless the assembly happened to get written out somewhere you can load it from.
I assumed you could just get the bits via reflection (since the classes support the required methods), but the related methods simply raise an exception ("The invoked member is not supported in a dynamic module.") on invocation. You can try this with the code below, but in short I suppose it means it ain't gonna happen unless you want to f*ck with Marshal::GetFunctionPointerForDelegate(). You'd have to binary-dump the instructions and manually disassemble them as IL opcodes. There be dragons.
Code snippet:
using System;
using System.Linq;
using Mono.Cecil;
using Mono.Cecil.Cil;
using System.Reflection.Emit;
using System.Reflection;

namespace REFLECT
{
    class Program
    {
        private static Type EmitType()
        {
            var dyn = AppDomain.CurrentDomain.DefineDynamicAssembly(new AssemblyName("Emitted"), AssemblyBuilderAccess.RunAndSave);
            var mod = dyn.DefineDynamicModule("Emitted", "Emitted.dll");
            var typ = mod.DefineType("EmittedNS.EmittedType", System.Reflection.TypeAttributes.Public);
            var mth = typ.DefineMethod("SuperSecretEncryption", System.Reflection.MethodAttributes.Public | System.Reflection.MethodAttributes.Static, typeof(String), new [] { typeof(String) });
            var il = mth.GetILGenerator();
            il.EmitWriteLine("Emit was here");
            il.Emit(System.Reflection.Emit.OpCodes.Ldarg_0);
            il.Emit(System.Reflection.Emit.OpCodes.Ret);
            var result = typ.CreateType();
            dyn.Save("Emitted.dll");
            return result;
        }

        private static Type TestEmit()
        {
            var result = EmitType();
            var instance = Activator.CreateInstance(result);
            var encrypted = instance.GetType().GetMethod("SuperSecretEncryption").Invoke(null, new [] { "Hello world" });
            Console.WriteLine(encrypted); // This works happily, printing "Emit was here" first
            return result;
        }

        public static void Main (string[] args)
        {
            Type emitted = TestEmit();
            // CRASH HERE: even if the assembly was actually for SaveAndRun _and_ it
            // has actually been saved, there seems to be no way to get at the image
            // directly:
            var ass = AssemblyFactory.GetAssembly(emitted.Assembly.GetFiles(false)[0]);
            // the rest was intended as a mockup of how to isolate the interesting bits,
            // but I didn't get much chance to test that :)
            var types = ass.Modules.Cast<ModuleDefinition>().SelectMany(m => m.Types.Cast<TypeDefinition>()).ToList();
            var typ = types.FirstOrDefault(t => t.Name == emitted.Name);
            var operands = typ.Methods.Cast<MethodDefinition>()
                .SelectMany(m => m.Body.Instructions.Cast<Instruction>())
                .Select(i => i.Operand);
            var requiredTypes = operands.OfType<TypeReference>()
                .Concat(operands.OfType<MethodReference>().Select(mr => mr.DeclaringType))
                .Select(tr => tr.Resolve()).OfType<TypeDefinition>()
                .Distinct();
            var requiredAssemblies = requiredTypes
                .Select(tr => tr.Module).OfType<ModuleDefinition>()
                .Select(md => md.Assembly.Name as AssemblyNameReference);
            foreach (var t in types.Except(requiredTypes))
                ass.MainModule.Types.Remove(t);
            foreach (var unused in ass.MainModule
                .AssemblyReferences.Cast<AssemblyNameReference>().ToList()
                .Except(requiredAssemblies))
                ass.MainModule.AssemblyReferences.Remove(unused);
            AssemblyFactory.SaveAssembly(ass, "/tmp/TestCecil.dll");
        }
    }
}
If all you want is the IL for your User class, you already have it. It's in the dll that you compiled it to.
From your other assembly, you can load the dll with the User class dynamically and use it through reflection.
UPDATE:
If what you have is a dynamic class created with Reflection.Emit, you have an AssemblyBuilder that you can use to save it to disk.
If your dynamic type was instead created with Mono.Cecil, you have an AssemblyDefinition that you can save to disk with myAssemblyDefinition.Write("MyAssembly.dll") (in Mono.Cecil 0.9).

ANTLR forward references

I need to create a grammar for a language with forward references. I think that the easiest way to achieve this is to make several passes on the generated AST, but I need a way to store symbol information in the tree.
Right now my parser correctly generates an AST and computes scopes of the variables and function definitions. The problem is, I don't know how to save the scope information into the tree.
Fragment of my grammar:
composite_instruction
scope JScope;
@init {
    $JScope::symbols = new ArrayList();
    $JScope::name = "level " + $JScope.size();
}
@after {
    System.out.println("code block scope " + $JScope::name + " = " + $JScope::symbols);
}
    : '{' instruction* '}' -> ^(INSTRUCTION_LIST instruction*)
    ;
I would like to put a reference to current scope into a tree, something like:
: '{' instruction* '}' -> ^(INSTRUCTION_LIST instruction* {$JScope::symbols})
Is it even possible? Is there any other way to store current scopes in a generated tree? I can generate the scope info in a tree grammar, but it won't change anything, because I still have to store it somewhere for the second pass on the tree.
To my knowledge, the syntax for the rewrite rules doesn't allow for directly assigning values as your tentative snippet suggests. This is in part because the parser wouldn't really know to what part of the tree/node the values should be added.
However, one of the cool features of ANTLR-produced ASTs is that the parser makes no assumptions about the type of the nodes. One just needs to implement a TreeAdaptor, which serves as a factory for new nodes and as a navigator of the tree structure. One can therefore stuff whatever info may be needed into the nodes, as explained below.
ANTLR provides a default tree node implementation, CommonTree, and in most cases (as in the situation at hand) we merely need to:
- subclass CommonTree by adding some custom fields to it
- subclass CommonTreeAdaptor to override its create() method, i.e. the way it produces new nodes.
One could also create a novel type of node altogether, for some odd graph structure or whatnot, but for the case at hand the following should be sufficient (adapt for the specific target language if this isn't Java):
import org.antlr.runtime.tree.*;
import org.antlr.runtime.Token;

public class NodeWithScope extends CommonTree {
    /* Just declare the extra fields for the node */
    public ArrayList symbols;
    public String name;
    public Object whatever_else;

    public NodeWithScope (Token t) {
        super(t);
    }
}

/* TreeAdaptor: we just need to override the create method */
class NodeWithScopeAdaptor extends CommonTreeAdaptor {
    public Object create(Token standardPayload) {
        return new NodeWithScope(standardPayload);
    }
}
One then needs to slightly modify the way the parsing process is started, so that ANTLR (or rather the ANTLR-produced parser) knows to use the NodeWithScopeAdaptor rather than CommonTree.
(Step 4.1 below; the rest is a rather standard ANTLR test rig.)
// ***** Typical ANTLR pipe rig *****
// ** 1. Input stream
ANTLRInputStream input = new ANTLRInputStream(my_input_file);
// ** 2. Lexer
MyGrammarLexer lexer = new MyGrammarLexer(input);
// ** 3. Token stream produced by the lexer
CommonTokenStream tokens = new CommonTokenStream(lexer);
// ** 4. Parser
MyGrammarParser parser = new MyGrammarParser(tokens);
// 4.1 !!! Specify the TreeAdaptor
NodeWithScopeAdaptor adaptor = new NodeWithScopeAdaptor();
parser.setTreeAdaptor(adaptor); // use my adaptor
// ** 5. Start the process by invoking the root rule
r = parser.MyTopRule();
// ** 6. AST tree
NodeWithScope t = (NodeWithScope)r.getTree();
// ** 7. etc. Walk the tree or do whatever is needed on it.
Finally, your grammar would have to be adapted with something akin to what follows.
(Note that the node [for the current rule] is only available in the @after section. It may, however, reference any token attribute and other contextual variables from the grammar level, using the usual $rule.attribute notation.)
composite_instruction
scope JScope;
@init {
    $JScope::symbols = new ArrayList();
    $JScope::name = "level " + $JScope.size();
}
@after {
    ($composite_instruction.tree).symbols = $JScope::symbols;
    ($composite_instruction.tree).name = $JScope::name;
    ($composite_instruction.tree).whatever_else
            = new myFancyObject($x.Text, $y.line, whatever, blah);
}
    : '{' instruction* '}' -> ^(INSTRUCTION_LIST instruction*)
    ;

Can Rhino stub out a dictionary so that no matter what key is used the same value comes back?

var fakeRoles = MockRepository.GenerateStub<IDictionary<PermissionLevel, string>>();
fakeRoles[PermissionLevel.Developer] = "Developer";
fakeRoles[PermissionLevel.DeveloperManager] = "Developer Manager";
This is specific to what that method happens to be calling, and is irrelevant for the sake of my unit test.
I'd rather do this:
fakeRoles.Stub(r => r[PermissionLevel.None]).IgnoreArguments().Return("Developer");
But I get an exception telling me to set the properties directly. Is there a way to tell Rhino to just return the same value for any key given to this stub IDictionary?
What you are trying to do is not a stub (in RhinoMocks' understanding); you have to create a mock:
var fakeRoles = MockRepository.GenerateMock<IDictionary<PermissionLevel, string>>();
fakeRoles.Expect(r => r[PermissionLevel.None]).IgnoreArguments().Return("Developer");
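As a separate illustration of the behavior being asked for (any key returns one canned value), here is a sketch in Python; the C# analogue would be a small hand-rolled IDictionary<PermissionLevel, string> implementation handed to the code under test, rather than a Rhino-generated stub:

```python
class SameValueDict(dict):
    """A dictionary that answers every missing key with one canned value,
    mimicking the IgnoreArguments-style expectation from the question."""

    def __init__(self, value):
        super().__init__()
        self._canned = value

    def __missing__(self, key):
        # dict.__getitem__ calls __missing__ for absent keys in a subclass.
        return self._canned

fake_roles = SameValueDict("Developer")
print(fake_roles["anything"], fake_roles[42])  # Developer Developer
```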