I am trying to work out a method to have component-level variants in RTC.
The requirement is that there should be, say, at least two lines of development for the component. If there is a bug fix on the parent line, I need to merge that change into the second line. I have not yet come up with a method using streams. Any suggestions?
Two lines of development means two streams.
You can easily add another stream to the flow targets of a repo workspace.
I would recommend adding the parent stream to the flow targets of the repo workspace on the second stream.
That means you would accept changes coming from the parent stream (each time you set that stream as the "current" one in the flow target section).
Once you have accepted those changes (and merged them in your local workspace or sandbox), you set your second (and "default") stream as the current one again, and you are ready to deliver the change sets you just accepted to the second stream.
See an illustration in the section "How do I use the "new" method to accept from an integration stream instead of delivering to it?".
Could you explain the usage of modules with a select query?
For example, if I write (as shown on this page: https://cumulocity.com/guides/users-guide/administration/):
select * from MeasurementCreated
Is it useful for getting real-time notifications by subscribing to the related channel? Is the module reachable from an AngularJS module? Can this module be used in other CEL statements?
Just selecting data without putting it into another stream can make sense if you want to make this data available via a real-time channel to some external application (which could of course be AngularJS).
Take a look at this section in the docs: http://cumulocity.com/guides/reference/real-time-statements/#notifications
This particular example, though, does not make much sense, because raw measurement data is already provided on a real-time channel:
http://www.cumulocity.com/guides/reference/measurements/#notifications
As for the second part of the question:
Yes, it is possible to communicate with other modules within your tenant.
E.g. you can declare some stream in module a and it will be available in module b; see the sketch below.
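A minimal sketch in Cumulocity's event language (the stream and statement names here are made up for illustration):

// module a: put selected measurements into a named stream
@Name("forwardMeasurements")
insert into FilteredMeasurement
select m.measurement as measurement
from MeasurementCreated m;

// module b: the stream declared in module a is visible here
@Name("consumeFiltered")
select * from FilteredMeasurement;

The statement in module b can then also be exposed on a real-time channel, as described in the notifications reference above.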
Is it possible to write a plugin for Glimpse's existing SQL tab?
I'm trying to log my SQL queries, and the currently available extensions don't support our in-house SQL library. I have written a custom plugin which logs what I want, but it has limited functionality and it doesn't integrate with the existing SQL tab.
Currently, I'm logging to my custom plugin using a single helper method inside my DAL's base class. This function takes the SqlCommand and duration in order to show data on my custom tab:
// simplified example:
Stopwatch sw = Stopwatch.StartNew();
sqlCommand.Connection = sqlConnection;
sqlConnection.Open();
object result = sqlCommand.ExecuteScalar();
sqlConnection.Close();
sw.Stop();
long duration = sw.ElapsedMilliseconds;
LogSqlActivity(sqlCommand, null, duration);
This works well on my 'custom' tab but unfortunately means I don't get metrics shown on Glimpse's HUD.
Is there a way I can provide Glimpse directly with the info it needs (in terms of method names and parameters) so it displays natively on the SQL tab?
The following advice is based on the fact that you can't use DbProviderFactory and can't use a proxied SqlCommand, etc.
The data that appears in the "out-of-the-box" SQL tab is based on messages of given types being published through our internal Message Broker (see below for information on this). Because of the above limitations in your case, to get things lighting up correctly (i.e. your data showing up in the HUD and the SQL tab), you will need to simulate the work that we do under the covers when we publish these messages. This shouldn't be that difficult, and once done, it should just work moving forward.
If you have a look at the various proxies we have here, you will be able to see what messages we publish in what circumstances. Here are some highlights:
DbCommand
Log command start - here
Log command error - here
Log command end - here
DbConnection
Log connection open - here
Log connection closed - here
DbTransaction
Log Started - here
Log committed - here
Log rollback - here
Other
Command row count here - Glimpse calculates this at the DbDataReader level, but you could do it elsewhere as well
Now that you have an idea of what messages we are expecting and how we generate them, as long as you pass in the right data when you publish those messages, everything should just light up - if you are interested, here is the code that looks for the messages that you will be publishing.
Message Broker: If you look at the GlimpseConfiguration here, you will see how to access the broker. This can be done statically if needed (as we do here). From there you can publish the messages you need.
Helpers: For generating some of the above messages, you can use the helpers inside the Support class here. I would have shifted all the code for publishing the actual messages to this class, but I didn't think there would be too many people doing what you are doing.
Update 1
Starting point: With the above approach you shouldn't need to write your own plugin. You should just be able to access the broker via GlimpseConfiguration.GetConfiguredMessageBroker() (make sure you check whether it's null, which it is if Glimpse is turned off, etc.) and publish your messages.
I would imagine that you would put the inspection that leverages the broker and publishes the messages wherever you have knowledge of the information that needs to be collected (i.e. inside your custom lib). Normally this would require references to Glimpse inside your lib (which you may not want), so to protect against this, from your lib you would call a proxy (which could be another VS project) that has the Glimpse dependency. Hence your ADO lib only has references to your own code.
To get your toes wet, try just publishing a couple of fake connection and command messages. Assuming the broker you get from GlimpseConfiguration.GetConfiguredMessageBroker() isn't null, these should just show up. Then you can work towards getting real data into it from your lib.
Update 2
Obsolete Broker Access
It's marked as obsolete because it's going to change in v2. You will still be able to do what you need to do, but the way of accessing the broker will change. For what you currently need to do, this is OK.
Sometimes null
As you have found, this really depends on where in the page lifecycle you currently are. To get around this, I would probably change my original recommendation a little.
In the code where you are currently creating messages and pushing them to the message bus, try putting them into HttpContext.Current.Items. If you haven't used it before, this is a store which ASP.NET provides out of the box and which lasts for the lifetime of a given request. You could put a list in there, still create the message objects as you are doing now, but add them to that list instead of pushing them through the broker.
Then create an HttpModule (it's really simple to do) which taps into the PostLogRequest event. Within this handler, you would pull the list out of the context, iterate through it and push each message into the message broker (accessed the same way you have been); see the sketch below.
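A minimal sketch of such a module, assuming Glimpse 1.x namespaces (Glimpse.Core.Framework and Glimpse.Core.Extensibility); the Defer helper, the module name and the items key are made up for illustration. Storing publish callbacks rather than bare message objects keeps each message's concrete type for the broker's generic Publish<T>:

using System;
using System.Collections.Generic;
using System.Web;
using Glimpse.Core.Extensibility;
using Glimpse.Core.Framework;

public class DeferredGlimpseMessageModule : IHttpModule
{
    private const string ItemsKey = "PendingGlimpseMessages"; // made-up key

    // Call this from the DAL/proxy instead of publishing directly.
    public static void Defer(Action<IMessageBroker> publish)
    {
        var pending = HttpContext.Current.Items[ItemsKey] as List<Action<IMessageBroker>>;
        if (pending == null)
        {
            pending = new List<Action<IMessageBroker>>();
            HttpContext.Current.Items[ItemsKey] = pending;
        }
        pending.Add(publish);
    }

    public void Init(HttpApplication application)
    {
        application.PostLogRequest += (sender, e) =>
        {
            var pending = HttpContext.Current.Items[ItemsKey] as List<Action<IMessageBroker>>;
            var broker = GlimpseConfiguration.GetConfiguredMessageBroker();
            if (pending == null || broker == null) // broker is null when Glimpse is off
                return;
            foreach (var publish in pending)
                publish(broker);
        };
    }

    public void Dispose() { }
}

Your DAL would then call Defer with a callback that publishes one of the message types listed above, and the module (registered in web.config) flushes them all at PostLogRequest.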
I have this requirement: my WL application has a set of static pages that might be updated at any time. Originally, the source of all static content is a desktop page that will be transformed by XSL into mobile-friendly content. The problem is that I don't want to do that on each request (HA requirement).
I want to get some inspiration on how to architect that without using the Direct Update mechanism (I don't want the end user to be notified of these updates).
I should note that the pages will change rarely, every few months maybe.
I'm thinking about two ways of doing that:
1- Making the transformation on the adapter side and relying on WL caching so that the transformation is not made each time (does that exist?). But how will the adapter get notified of a page change and flush the cache? Should I program some advanced Java-based adapter? (Storing in the cache and having some kind of job that scans every day for content changes?)
2- Doing it on the mobile side, but I don't know how to get notified of changes!
Is your only problem with Worklight's Direct Update that the user is being notified and is required to explicitly approve the transfer?
In that case, why not use the option of Silent Direct Update?
The property you're looking for is updateSilently, set to true in initOptions.js.
For this to work it is required, obviously, that connectOnStartup be set to true as well.
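A minimal sketch of the relevant part of initOptions.js (the property names are as in the Worklight docs; the rest of the stock file stays as generated):

var wlInitOptions = {
    // required: a silent update can only happen when the app connects on startup
    connectOnStartup : true,

    // apply Direct Update without showing the confirmation dialog
    updateSilently : true
};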
Perhaps what is doable is to use an adapter to fetch the HTML (or whatever it is) and save it to the device's local storage, and then have the app display this content. This way you do not alter the app's web resources and do not trigger Direct Update.
This question is linked to:
Why can't all change sets be found when accepting change sets?
If RTC discovers that, by accepting a change set, some of the change sets which depend on it will not be accepted, is it possible to view what these change sets are?
It seems very risky to accept a change set without being 100% sure that the change sets being accepted contain the same source code file changes as intended by the developer who delivered them.
What you can do is add the repo workspace of the developer who delivered to the source stream.
You would add that repo workspace as a flow target of your own repo workspace (the one you are using for the merge).
By setting that new flow target as current, you can quickly see which change sets you are missing.
Of course, the real issue is that what I just proposed doesn't scale: if you have multiple developers who have made partial deliveries (with missing intermediate change sets), then you would need to look in turn at each developer's repo workspace.
That is why, since 2007(!), there has been the "blocker" Work Item "Enhancement 24822":
"Confirm Content Accept" dialog should be more specific (offer to fill in gaps when a gap exception occurs)
... but it is still unresolved, even in the latest RTC 4.x.
Suppose you have the code of a Cocoa app which logs its own messages through NSLog and printf calls to the console output. My goal is to redirect all this output into a separate NSWindow, in an NSView.
How can I achieve this in a way that
minimizes the amount of code to rewrite
makes it possible to revert back
maximizes the reuse of written code
(the usual software engineering guidelines)?
Soon after your program starts, use the dup(2) call to duplicate the file descriptors for fd 1 (stdout) and fd 2 (stderr). Save the returned values for later.
Call close(2) on FD 1 and FD 2.
Call openpty(3) twice. The first master returned should be FD 1 (because it is the first available FD), and the second master should be 2. Save the two slave FDs for later. Don't worry about saving the name parameter. Now whenever your program printf(3)s to stdout, or NSLog writes to stderr, the data will show up on your slave FDs.
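A minimal C sketch of steps 1-3; note that it pins the masters onto fds 1 and 2 with dup2(2) instead of relying on close(2) and fd-allocation order, which gives the same wiring but is harder to get wrong:

#include <unistd.h>
#include <util.h>   /* openpty(3) on OS X; on Linux use <pty.h> and link -lutil */

static int saved_stdout, saved_stderr;  /* step 1: saved for reverting later */
static int out_slave, err_slave;        /* captured output is read from these */

static int redirect_stdio(void)
{
    int out_master, err_master;

    /* step 1: duplicate the original descriptors */
    saved_stdout = dup(STDOUT_FILENO);
    saved_stderr = dup(STDERR_FILENO);

    /* step 3: one pty per stream */
    if (openpty(&out_master, &out_slave, NULL, NULL, NULL) == -1)
        return -1;
    if (openpty(&err_master, &err_slave, NULL, NULL, NULL) == -1)
        return -1;

    /* steps 2+3: the masters become fds 1 and 2, so printf(3) and NSLog
       output lands on the ptys and can be read back from the slave fds */
    dup2(out_master, STDOUT_FILENO);
    dup2(err_master, STDERR_FILENO);
    close(out_master);
    close(err_master);
    return 0;
}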
Now you must choose whether you want to poll the slave FDs or set up a signal for when there is data to be read.
For polling, use an NSTimer. In your timer callback, use select(2) on your two slave FDs to see if they have data. If they do, read(2) it and then output it to your window.
You can also put the two slave FDs into non-blocking IO mode (use fcntl(2) to F_SETFL the slave FDs to O_NONBLOCK). Then you don't need select(2); you just read(2), and it will fail with EAGAIN (rather than block) if there is nothing to read.
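A sketch of that non-blocking variant (call make_nonblocking once per slave fd, then drain from the NSTimer callback); note that with O_NONBLOCK an empty pty gives -1 with errno set to EAGAIN, while a return of 0 means the other end was closed:

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

static void make_nonblocking(int fd)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
}

/* Returns the number of bytes read into buf, or 0 when there is currently
   nothing to read. Append the returned bytes to the log view. */
static ssize_t drain_fd(int fd, char *buf, size_t len)
{
    ssize_t n = read(fd, buf, len);
    if (n == -1 && errno == EAGAIN)
        return 0;   /* no data right now */
    return n;
}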
For signaling, use fcntl(2) to F_SETFL the slave FDs to O_ASYNC. Then use signal(3) to install a signal handler for SIGIO. When your signal handler is called, use one of the two methods described in the polling section.
If at run time you want to discard all these changes and set everything back to normal, do this:
Call close(2) on FD 1, and FD 2.
Call dup(2) on the two FDs saved in step 1 of the first section, above. Do the dup(2) calls in the correct order, so stdout gets FD 1 and stderr gets FD 2.
Now anything written to stdout and stderr will go to the original FDs.
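The matching revert for the sketch above; using dup2(2) makes the ordering concern moot, since it targets the fd numbers explicitly:

#include <unistd.h>

static void restore_stdio(void)
{
    /* put the descriptors saved in step 1 back onto fds 1 and 2 */
    dup2(saved_stdout, STDOUT_FILENO);
    dup2(saved_stderr, STDERR_FILENO);
    close(saved_stdout);
    close(saved_stderr);
    close(out_slave);
    close(err_slave);
}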
How about writing your own log method that informs some controller of the log message (so it can place it into your (NSTextView?) view), then calls NSLog() in turn?
Here is the solution I came up with:
Create an NSWindowController that has an NSTextView as an outlet (let's call this A).
Create a singleton class (let's call this B) that encapsulates an A object and provides some methods for sending strings to A (which appends them to its NSTextView). B reads a file (which contains all the logging) using readInBackgroundAndNotify from NSFileHandle, and when notified it calls the appending method. It also has a method for starting the logging to the file, which uses freopen(3) to redirect the streams (stderr and stdout at the moment) to that file in append mode; a sketch follows.
In the project, just call B's start-logging method after importing it (no need for instantiation, but I guess it really does not matter).
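For reference, the freopen(3) part of B boils down to something like this (the function name and path argument are just examples):

#include <stdio.h>

/* Redirect both streams to a log file in append mode; NSLog writes to
   stderr and printf to stdout, so both end up in the file that the
   NSFileHandle is watching. */
static void start_logging_to_file(const char *path)
{
    freopen(path, "a", stdout);
    freopen(path, "a", stderr);
}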
This solution was created considering both Joshua Nozzi's answer and tlindner's, and combines them. I have an encapsulated solution that respects the three requests in the question (I have to add only a line of code, I can revert easily, and I can reuse this solution in other apps too). I noticed that it can sometimes be wrong to have an NSWindowController encapsulated this way (whereas all the other ones are managed by some super-controller).
I finally opted for the file solution since it is very easy to implement and more Cocoa-like than tlindner's. It also gives me a log file that persists on disk. But of course I may have missed something; please point it out in the comments ^^