In the Apache Brooklyn web interface we would like to display some content for the system managers. The content is too long to be served as a simple sensor value.
Our idea was to create a task, write the content into the task's output stream, and then offer the REST-based URL to the managers like this:
/v1/activities/{task}/stream/stdout (with the link masked by some nice text, of course)
The stream and task are created like this:
LOG.info("{} Creating Activity for ClusterReport Feed", this);
activity = Tasks.builder()
        .displayName("clusterReportFeed")
        .description("Output for the Cluster Report Feed")
        .body(new Runnable() {
            @Override
            public void run() {
                // do nothing; the task exists only to expose its output stream
            }
        })
        .parallel(true)
        .build();
LOG.info("{} Task created with id: {}", this, activity.getId());
Entities.submit(server, activity).getUnchecked();
The task seems to be created and the interaction works perfectly fine.
However, when I try to access the task's output stream from my browser using the prepared URL, I get an error saying that the task does not exist.
Our theory is that we are not in the right Management/Execution Context: the web page runs in a different context than the entities and their sensors. How can we submit a task so that it is also visible in the web console's context?
Alternatively, is it possible to write the content into a file and then offer it for download via Jetty (Brooklyn's web server)? That would be a much simpler way.
Many tasks in Brooklyn default to being transient - i.e. they are deleted shortly after they complete (things like effector invocations are by default non-transient).
You can mark your task as non-transient by including the tag below in your use of the task builder:
.tag(BrooklynTaskTags.NON_TRANSIENT_TASK_TAG)
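For example, here is the builder from the question with the tag added (a minimal sketch reusing the names from your snippet):
// The same task as before, tagged so that Brooklyn keeps it after completion
activity = Tasks.builder()
        .displayName("clusterReportFeed")
        .description("Output for the Cluster Report Feed")
        .tag(BrooklynTaskTags.NON_TRANSIENT_TASK_TAG)
        .body(new Runnable() {
            @Override
            public void run() {
                // no-op; the task exists only to expose its output stream
            }
        })
        .build();
Entities.submit(server, activity).getUnchecked();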
However, note that (as of Brooklyn version 0.9.0) tasks are kept in-memory using soft references. This means the stdout of the task will likely be lost at some point in the future, when that memory is needed for other in-memory objects.
For your use case, would it perhaps make sense to expose this as an effector result?
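A minimal sketch of that approach, assuming Brooklyn's Effectors/EffectorBody API; buildClusterReport() is a hypothetical static helper that assembles the report text:
// Sketch only: an effector whose return value is the report,
// retrievable afterwards via the REST API.
public static final Effector<String> CLUSTER_REPORT =
        Effectors.effector(String.class, "clusterReport")
                .description("Returns the cluster report as a string")
                .impl(new EffectorBody<String>() {
                    @Override
                    public String call(ConfigBag parameters) {
                        return buildClusterReport(); // hypothetical static helper
                    }
                })
                .build();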
Or could you write to an object store such as S3 instead? The S3 approach would seem best to me.
If writing it to a file, care must be taken with Brooklyn high availability: would you write to a shared volume?
If you do write to a file, then you'd need to provide a web-extension so that people can access the contents of that file. As of Brooklyn 0.9.0, you can add your own WARs in code when calling BrooklynLauncher (which calls BrooklynWebServer).
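For example (a sketch only; it assumes BrooklynLauncher exposes a webapp(contextPath, warUrl) method, and "/reports" and reports.war are hypothetical names for a WAR that serves your files):
// Sketch: deploy an extra WAR alongside the Brooklyn web console.
BrooklynLauncher launcher = BrooklynLauncher.newInstance()
        .webapp("/reports", "classpath://reports.war")
        .start();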
I am introducing DataStore in my Android Compose app for storing user preferences. I am not happy about keeping the DataStore instance as an attribute of the Context instance, because the Context is accessible from a @Composable only (and not in, e.g., a repository), but I am still going for it.
Let's assume (following the referenced tutorial) that getEmail is the function that reads the DataStore key-value pair and returns a Flow instance.
My intention is to put the following (approximate) code into one of my top-level @Composable functions, which has AppContainer as an argument (such composables are very top-level):
var email = getEmail.collectAsState("") //or should I use single()?
appContainer.salesOrderRespository.setEmail(email)
But I am afraid to do this in the very crude way I have written above. E.g., I am concerned about the following things:
should I put this code in some scope (because collectAsState could block the GUI thread), like:
val scope = rememberCoroutineScope()
scope.launch {
var email = getEmail.collectAsState("") //or should I use single()?
appContainer.salesOrderRespository.setEmail(email)
}
Can I use the construction var email = getEmail.collectAsState("")? The email value is not available immediately; it is assigned asynchronously. That is why I may need something like this (just imagination):
getEmail.collectAsState("").onReadingDone( it -> { appContainer.salesOrderRespository.setEmail(it) })
And, of course, I am eager to execute this code as early as possible. Almost all of my repositories need configuration data, and if the application starts up and proceeds while still reading the configuration data from the DataStore, then the app will not work as expected.
So I am trying to do just one thing: read the DataStore (1. as an attribute of the Context, because there is no other proper global instance on which I can create such an attribute; 2. inside a @Composable, because the Context can be accessed from a @Composable only) and assign the read value to attributes of one or more repositories. But I have observed that this operation is very complex and involves many concerns, which I have listed above. That is why my question is quite long and complex, even though it effectively tackles one simple thing: reading + assigning.
You don't have to use the Context helper. Instead, I would look into integrating Hilt and injecting your DataStore into your repository.
Here is a blog about the technique.
https://medium.com/androiddevelopers/datastore-and-dependency-injection-ea32b95704e3
Is it possible to write a plugin for Glimpse's existing SQL tab?
I'm trying to log my SQL queries, and the currently available extensions don't support our in-house SQL library. I have written a custom plugin which logs what I want, but it has limited functionality and it doesn't integrate with the existing SQL tab.
Currently, I'm logging to my custom plugin using a single helper method inside my DAL's base class. This function takes the SqlCommand and duration in order to show data on my custom tab:
// simplified example:
Stopwatch sw = Stopwatch.StartNew();
sqlCommand.Connection = sqlConnection;
sqlConnection.Open();
object result = sqlCommand.ExecuteScalar();
sqlConnection.Close();
sw.Stop();
long duration = sw.ElapsedMilliseconds;
LogSqlActivity(sqlCommand, null, duration);
This works well on my 'custom' tab, but unfortunately it means I don't get metrics shown on Glimpse's HUD.
Is there a way I can provide Glimpse directly with the info it needs (in terms of method names, and parameters) so it displays natively on the SQL tab?
The following advice is based on the fact that you can't use DbProviderFactory and you can't use a proxied SqlCommand, etc.
The data that appears in the "out-of-the-box" SQL tab is based on messages of given types being published through our internal Message Broker (see below for information on this). Because of the above limitations in your case, to get things lighting up correctly (i.e. your data showing up in the HUD and the SQL tab), you will need to simulate the work that we do under the covers when we publish these messages. This shouldn't be that difficult, and once done, it should just work going forward.
If you have a look at the various proxies we have here, you will be able to see what messages we publish in what circumstances. Here are some highlights:
DbCommand
Log command start - here
Log command error - here
Log command end - here
DbConnection
Log connection open - here
Log connection closed - here
DbTransaction
Log Started - here
Log committed - here
Log rollback - here
Other
Command row count here - Glimpse calculates this at the DbDataReader level, but you could do it elsewhere as well
Now that you have an idea of what messages we are expecting and how we generate them, as long as you pass in the right data when you publish those messages, everything should just light up - if you are interested here is the code that looks for the messages that you will be publishing.
Message Broker: If you look at the GlimpseConfiguration here, you will see how to access the Broker. This can be done statically if needed (as we do here). From here you can publish the messages you need.
Helpers: For generating some of the above messages, you can use the helpers inside the Support class here. I would have shifted all the code for publishing the actual messages to this class, but I didn't think there would be too many people doing what you are doing.
Update 1
Starting point: With the above approach you shouldn't need to write your own plugin. You should just be able to access the broker via GlimpseConfiguration.GetConfiguredMessageBroker() (make sure you check whether it's null, which it will be if Glimpse is turned off, etc.) and publish your messages.
I would imagine that you would put the inspection code that leverages the broker and publishes the messages wherever you have knowledge of the information that needs to be collected (i.e. inside your custom lib). Normally this would require references to Glimpse inside your lib (which you may not want), so to protect against this, your lib would call a proxy (which could be another VS project) that has the Glimpse dependency. Hence your ADO lib only has references to your own code.
To get your toes wet, try just publishing a couple of fake connection and command messages. Assuming the broker you get from GlimpseConfiguration.GetConfiguredMessageBroker() isn't null, these should just show up. Then you can work towards getting real data into it from your lib.
Update 2
Obsolete Broker Access
It's marked as obsolete because it's going to change in v2. You will still be able to do what you need to do, but the way of accessing the broker has changed. For what you currently need to do, this is OK.
Sometimes null
As you have found, this really depends on where you currently are in the page lifecycle. To get around this, I would probably change my original recommendation a little.
In the code where you are currently creating messages and pushing them to the message bus, try putting them into HttpContext.Current.Items instead. If you haven't used it before, this is a store that ASP.NET provides out of the box and that lasts the lifetime of a given request. You could keep a list in there: still create the message objects as you are doing, but put them into that list instead of pushing them through the broker.
Then create an HttpModule (it's really simple to do) which taps into the PostLogRequest event. Within this handler, you would pull the list out of the context, iterate through it, and push each message into the message broker (accessed the same way you have been).
I am using Jena to create a triple store (TDB functionality) with the following code:
public void createTDBFromOWL() {
Dataset dataset = TDBFactory.createDataset(newTripleStoreLocation);
dataset.begin(ReadWrite.WRITE);
try {
//getting the model inside the transaction
Model model = dataset.getDefaultModel();
FileManager fileManager=FileManager.get();
Model holder=fileManager.readModel(model, newOWLFileLocation);
//committing dataset
dataset.commit();
model.close();
holder.close();
} finally {
dataset.end();
dataset.close();
}
}
After I create the triple store, the files created are locked by my application server (Glassfish), and I can't delete them until I manually stop Glassfish and it releases its lock. As shown in the above code, I think I am closing everything, so I don't get why a lock is maintained on the files.
When you call Dataset#close(), the implementation delegates that call to an underlying
DatasetGraphBase#close(), which then ultimately delegates to DatasetGraphTDB#_close().
This results in calls to TripleTable#close() and QuadTable#close(). Both of these call (several) NodeTupleTable#close(). Continuing with the indirection, this calls NodeTable#close() and TupleTable#close(). The former is an interface, so we'd need to make a proper guess as to which class is run in your implementation. The latter iterates through a collection of TupleIndex objects and calls close() on each of them. TupleIndex is, also, an interface.
There is only one meaningful hierarchy of descendants from TupleIndex that results in something which can lock a file, which leads us to TupleIndexRecord#close(). We can then follow a particular implementation of RangeIndex called BPlusTree all the way down until we see actual ownership of the MappedByteBuffer.
Ultimately, while reading the implementation of BlockAccessMapped#close(), it seems like the entire hierarchy is closing things properly, down to the final classes, but that this longstanding bug may be the culprit. From the documentation:
once a file has been mapped a number of operations on that file will
fail until the mapping has been released (e.g. delete, truncating to a
size less than the mapped area). However the programmer can't control
accurately the time at which the unmapping takes place --- typically
it depends on the processing of finalization or a PhantomReference
queue.
So there you have it. Despite Jena's best efforts, one cannot yet control when that file will be unmapped in Java. This ends up being the tradeoff for memory-mapped file I/O in Java.
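The underlying JDK behaviour is easy to reproduce without Jena at all. A minimal sketch (most visible on Windows, where the pending unmapping blocks the delete; the file name is arbitrary):
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class MappedLockDemo {
    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("mapped-lock-demo", ".dat");
        MappedByteBuffer buffer;
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "rw");
             FileChannel channel = raf.getChannel()) {
            // Map a small region of the file into memory
            buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
            buffer.put((byte) 42);
        }
        // The channel is closed, but the mapping is still live: on Windows
        // this delete typically fails until GC finalizes the buffer.
        System.out.println("deleted? " + file.toFile().delete());
        System.out.println("still mapped, first byte: " + buffer.get(0));
    }
}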
I'd like to write a Minecraft mod which adds a new type of mob. Is that possible? I see that, in Bukkit, EntityType is a predefined enum, which leads me to believe there may not be a way to add a new type of entity. I'm hoping that's wrong.
Yes, you can!
I'd direct you to some tutorials on the Bukkit forums. Specifically:
Creating a Meteor Entity
Modifying the Behavior of a Mob or Entity
Disclaimer: the first is written by me.
You cannot truly add an entirely new mob just via Bukkit; you'd have to use Spout to give it a different skin. However, if you simply want a new mob and are content with sharing the skin of another entity, it can be done.
The idea is to inject the EntityType values via Java's Reflection API. It would look something like this:
public static void load() {
    try {
        // Grab the private registration method EntityTypes.a(Class, String, int)
        Method a = EntityTypes.class.getDeclaredMethod("a", Class.class, String.class, int.class);
        a.setAccessible(true);
        // The method is static, so the instance argument is null
        a.invoke(null, YourEntityClass.class, "Your identifier, can be anything", id_map);
    } catch (Exception e) {
        // Insert handling code here
    }
}
I think the above is fairly straightforward. We get a handle to the private method, make it accessible, and invoke the registration method. id_map contains the entity id to map your entity to; 12 is that of a fireball. The mapping can be found in EntityType.class. Note that these ids should not be confused with their packet designations. The two are completely different.
Lastly, you actually need to spawn your entity. MC will continue spawning the default entity, since we haven't removed it from the map. But it's just a matter of calling net.minecraft.server.spawnEntity(your_entity, SpawnReason.CUSTOM).
If you need a skin, I suggest you look into SpoutPlugin. It does require running the Spout client to join such a server, but the possibilities at that point are literally infinite.
Sadly, it would only be possible with client-side mods as well. You could look into Spout (http://www.spout.org/), which is a client mod that provides an API for server-side plugins to do more on the client, but without doing something client-side this is impossible.
It's not possible to add new entities, but it is possible to edit entity behaviors. For example, one time I made it so that you could tame iron golems and they would follow you around.
You can also sort of achieve custom-looking human entities by accessing player entities and tweaking network packets.
It's expensive, as you need to create a player account that then gets used to act as the mob. You then spawn a named entity and give it the same behaviour AI as you would an existing mob. Keep in mind, however, that you will need to write the AI yourself (you could borrow code straight from CraftBukkit/Bukkit), and you will need to push the movement and events of this mob to players within sight. Technically speaking, all you're doing is pushing packets to the client from the server about what's actually happening, but if you're outside that push list nothing will happen, and other players will see you being knocked around by an invisible something :) It's a bit of a mental leap :)
I'm using this concept to create NPCs that act as friendly and factional armies. I've also used mobs themselves as friendly entities (if you belong to a dark faction).
I'd personally like to see a future server API that can push model instructions to the client for a server-specific cache, as well as the ability to tell a client where to download mob skins.
It's doable today, but I'd have to create a plugin for the client to achieve this, which is back to a game of annoyance, especially when Mojang pushes out a new release and all the plugins take forever to rise with its tide.
In all honesty, this entire ecosystem could be managed more strategically, but right now I think it's just really ad hoc product management (speaking as a former product manager of .NET, I'd love to work on this strategy; it would be such a fun gig).
I'm after some advice on polling an external web service every 30 secs from a Domino server-side action.
A quick bit of background...
We track the location of cars through the TomTom API. We now have a requirement to show this in our web app, overlaid onto a map (Google, Bing, etc.) and mashed up with other lat/long data from our application. Think of it as dispatching calls to taxis and assigning those calls to the nearest taxi (it's not taxis/calls, but it is a similar process). We refresh the dispatch controllers' screens quite aggressively, so they can see the status of all the objects and assign to the nearest car. If we trigger the pull of data from the refresh of the users' screens, we get into some tricky controlling server-side, or else we will hit the maximum allowable requests per minute to the TomTom API.
Originally I was going to schedule an agent to poll the web service, write to a cached object in our app, and have the refreshing dispatch controllers' screens pull the data from our cache... great, except the user requirement is that our cache must be updated every 30 secs. I can create a program doc that runs every 1 min, but that is still not aggressive enough.
So we are currently left with two options: our .NET guy creates a service that polls TomTom every 30 secs and we retrieve the data from his service, or I figure out a way to do it in Domino. It would be nice to do it in the Domino database, and not in some standalone Java app or .NET service, to keep as much of the logic as possible in one system (Domino).
We use backing beans heavily in our system. I hope to test this later today, but does the following seem like a sensible route to go down?
Spawning threads in a JSF managed bean for scheduled tasks using a timer
...or are there limitations I am not aware of? Has anyone tackled this before in Domino, or have any comments?
Thanks in advance,
Nick
Check out DOTS (Domino OSGi Tasklet Service): http://www.openntf.org/internal/home.nsf/project.xsp?action=openDocument&name=OSGI%20Tasklet%20Service%20for%20IBM%20Lotus%20Domino
It allows you to define background Java tasks on a Domino server that have all the advantages of agents (can be scheduled or triggered) with none of the performance or maintenance issues.
You could cache the data in a bean (application or session scoped) and keep a date object that contains the last-refreshed time. When the data is requested, check the last cached date against the current time; if the difference is greater than or equal to 30 seconds, refresh the data.
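A minimal sketch of that lazy-refresh approach (CarPosition and fetchFromTomTom() are hypothetical stand-ins for your data type and web-service call):
import java.io.Serializable;
import java.util.Collections;
import java.util.List;

// Application-scoped cache bean: refreshes at most every 30 seconds,
// and only when the data is actually requested.
public class LocationCache implements Serializable {
    private static final long REFRESH_INTERVAL_MS = 30000L;

    private List<CarPosition> cached = Collections.emptyList();
    private long lastRefreshed = 0L;

    public synchronized List<CarPosition> getPositions() {
        long now = System.currentTimeMillis();
        if (now - lastRefreshed >= REFRESH_INTERVAL_MS) {
            cached = fetchFromTomTom(); // hypothetical web-service call
            lastRefreshed = now;
        }
        return cached;
    }

    private List<CarPosition> fetchFromTomTom() {
        // stub: call the TomTom API here and map the response
        return Collections.emptyList();
    }

    // Hypothetical data type for a vehicle position
    public static class CarPosition {
        public double lat;
        public double lng;
    }
}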
A way of doing it would be to write a managed bean that is created in the application scope (aka there can only be one). In this managed bean you take care of the 30-sec polling of the web service via a good old Java web-service implementation, plus a Java thread which you start at the creation of your managed bean, something like:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ServicePoller {
    private static ServicePollThread myThread = null;

    public ServicePoller() {
        if (myThread == null) {
            myThread = new ServicePollThread();
            new Thread(myThread).start();
        }
    }
}

class ServicePollThread implements Runnable {
    private final Map<String, Object> yourCache = new ConcurrentHashMap<String, Object>();
    private volatile boolean running = true;

    public ServicePollThread() {
    }

    public void run() {
        while (running) {
            doPoll(); // call the web service and update the cache
            try {
                Thread.sleep(30000); // 30 seconds, per the requirement
            } catch (InterruptedException e) {
                running = false;
            }
        }
    }
    ....
}
This managed bean will then poll the web service every 30 seconds and save its findings in a hashmap or some other managed-bean fields. This way you don't need to run an agent or anything like that, and the dispatch screen simply retrieves its data from the cache.
Another option would be to write a servlet (that should be possible with the ExtLib, but I can't find the reference right now) which does the threading and reads the service for you. Then from your database you should be able to read the servlet's cache and use it wherever you need.
As Tim said: DOTS. Or, as jjtbsomhorst said: a thread, or an Eclipse job.
I've created a video describing DOTS: http://www.youtube.com/watch?v=CRuGeKkddVI&list=UUtMIOCuOQtR4w5xoTT4-uDw&index=4&feature=plcp
Next Monday I'll publish a sample showing how to do threads and Eclipse jobs. Here is a preview video: http://www.youtube.com/watch?v=uYgCfp1Bw8Q&list=UUtMIOCuOQtR4w5xoTT4-uDw&index=1&feature=plcp