Is there a way to change the WSO2 ESB Management Console header text from "Management Console" to something else, e.g. "MC PROD 1"?
We would need that since we are running several WSO2 ESB instances and it's sometimes confusing to know which server we are logged in to.
For your requirement, you can change the web context in the URL of each instance. This is set in carbon.xml; the default is <WebContextRoot>/</WebContextRoot>, and you can provide a descriptive string there to identify the different nodes, e.g.:
<WebContextRoot>/esb</WebContextRoot>
<WebContextRoot>/as</WebContextRoot>
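Each console then lives under its own context, so the address bar tells you which node you are on. Assuming the default management port of 9443, the first instance above would be reached at https://<host>:9443/esb/carbon and the second at https://<host>:9443/as/carbon.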
In oneM2M, I want to update the MN-CSE configuration by sending a command from the IN-CSE to the MN-CSE. How can I achieve this?
My approach: I am thinking of creating an AE on the MN-CSE, say CONFIG-AE. Every time I want to change anything, I will create a new content instance inside CONFIG-AE's container. The container will have a subscription with the CONFIG-AE resource as the notification URL. Now, when a new content instance is added, the request will be redirected to the POA (point of access) of CONFIG-AE. The POA will basically be an IPE implementation which will process the action further. Is this approach correct?
CONFIG-AE (POA = an IPE implementation)
    |
    +-- Container
           |
           +-- Subscription (notificationURL = path of CONFIG-AE)
Thanks in advance.
Your approach would work. Any AE that has the permissions to create a content instance under the container can set configuration data this way. The CONFIG-AE in your example would then need to apply the new configuration accordingly. I am not sure, though, why this AE would be an IPE? With what would it provide interworking functionality?
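For illustration, the CREATE request that pushes a new configuration value into the container could look like this over oneM2M's HTTP binding (TS-0009); the host, resource names, originator and the JSON carried in "con" are all assumptions:

POST /~/mn-cse/CONFIG-AE/config-container HTTP/1.1
Host: mn-cse.example.com:8080
X-M2M-Origin: C-config-admin
X-M2M-RI: req-0001
Content-Type: application/json;ty=4

{"m2m:cin": {"con": "{\"logLevel\": \"debug\"}"}}

The MN-CSE would then, via the subscription on the container, notify CONFIG-AE's point of access, where your implementation applies the new settings.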
Nevertheless, you should also have a look at TS-0001, clause 10.2.8 "Device management", and the whole of TS-0022, "Field Device Configuration". There, oneM2M specifies dedicated management resources for managing nodes in a oneM2M deployment. This might look like overkill at first, but since the resource types defined there are well aligned with other management technologies, it might be worth the effort.
Depending on your infrastructure you might also want to look at TS-0005 "Management Enablement (OMA)" and TS-0006 "Management Enablement (BBF)" in case you are working with remote management technologies from OMA or BBF.
Let me get straight into this.
Well, I have set up NiFi on localhost. It's working well and everything seems perfect.
I have made many different flows, each with its own header, within the cluster as below.
Cluster
When I right-click the header, go to "View configuration" and then to "Properties", I see the following.
Processor details
You can see the "Listening Port", which is 10004, and a "Hostname" as well. Then there is "Allowed Paths", as can be seen.
Now, if I want to access this specific header, I have to hit 10.0.0.18:10004/spec/transform.
The issue is that I have many different headers, each with a different listening port assigned by me. NiFi is not allowing me to assign the same port to every flow I make; I have to assign a different port every time I create a new flow. I just want to assign port 10004 to every flow and differentiate them using "Allowed Paths".
How can I make this possible without always assigning a new port to every new flow? Is there a way to do that? I hope you understand what I am actually trying to achieve. Hope to have your answers soon.
Thank you
You can have one HandleHttpRequest at the beginning of your flow listening on port 10004, and set the "Allowed Paths" property to a regular expression that matches all of the paths you want to support. HandleHttpRequest will add the path as an attribute to each flow file named "http.context.path", so you could then use a RouteOnAttribute to route each path to a different part of the flow.
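For example (the paths here are purely illustrative), you could set "Allowed Paths" on the single HandleHttpRequest to a regular expression such as

/spec/.*|/other/.*

and then, on a RouteOnAttribute processor, add one user-defined property per route using the expression language, e.g.

transform : ${http.context.path:equals('/spec/transform')}

Each matching flow file is routed to the relationship named after that property, so every flow can share port 10004 and differ only in its path.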
As Bryan Bende says, but in NiFi 1.14.0 the attribute is http.request.uri.
I am trying to access the server address space and I am getting this error:
LabVIEW: (Hex 0xFFFA8EBB) The node path refers to a node that does not exist in the server address space
The server is on a PLC and I am connected via LAN. The information I have is:
Server-URL: opc.tcp://192.168.1.135:4840
Namespace-URI: urn:B&R/pv/
I have tried different things but I am not sure how to access the variables in the address space. Any suggestions would be helpful.
B&R publishes the endpoints of your data in a fairly consistent manner. If you use an OPC UA browsing tool (UaExpert, for example), you will find that the address space visible to LabVIEW should start with
PLC.Modules.<Default>
B&R Automation Studio requires that you complete the default OPC UA configuration. Within that configuration you would need to enable the nodes/endpoints in question. You can then access these nodes in LabVIEW.
You should check the following:
Under your controller, confirm that you have enabled OPC UA in the configuration view.
Next, check that you have added an OPC UA Default View File to your configuration for the hardware you are running.
Finally, in that file, ensure that you have enabled the endpoint/variable and that it has at least the Read permission. The quickest and most expedient way is to go to the top level of the OPC UA Default View File, add the Everyone role, and enable Read. This will cascade down to all enabled endpoints.
Save this and make sure it has been built and added to your controller. You should then be able to access the endpoints.
For example, if I have a program called "LampController" running in B&R with a variable called switchState, it would be addressed by:
PLC.Modules.<Default>.LampController.switchState
You need to use %26 in place of the ampersand. The ampersand is used to delimit the URI from the query segment. It's pretty unusual to even have an ampersand in a URI. Are you sure you typed it right?
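For example, with that encoding the Namespace-URI above would be entered as urn:B%26R/pv/.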
Is it possible to write a plugin for Glimpse's existing SQL tab?
I'm trying to log my SQL queries, and the currently available extensions don't support our in-house SQL library. I have written a custom plugin which logs what I want, but it has limited functionality and it doesn't integrate with the existing SQL tab.
Currently, I'm logging to my custom plugin using a single helper method inside my DAL's base class. This method takes the SqlCommand and duration in order to show data on my custom tab:
// simplified example:
Stopwatch sw = Stopwatch.StartNew();
sqlCommand.Connection = sqlConnection;
sqlConnection.Open();
object result = sqlCommand.ExecuteScalar();
sqlConnection.Close();
sw.Stop();
long duration = sw.ElapsedMilliseconds;
// hand the command (text + parameters) and its duration to the custom tab
LogSqlActivity(sqlCommand, null, duration);
This works well on my 'custom' tab but unfortunately means I don't get metrics shown on Glimpse's HUD.
Is there a way I can provide Glimpse directly with the info it needs (in terms of method names, and parameters) so it displays natively on the SQL tab?
The following advice is based on the fact that you can't use DbProviderFactory and you can't use a proxied SqlCommand, etc.
The data that appears in the "out-of-the-box" SQL tab is based on messages of given types being published through our internal Message Broker (see below for information on this). Because of the above limitations in your case, to get things lighting up correctly (i.e. your data showing up in the HUD and the SQL tab), you will need to simulate the work we do under the covers when we publish these messages. This shouldn't be that difficult, and once done, it should just work moving forward.
If you have a look at the various proxies we have here you will be able to see which messages we publish in which circumstances. Here are some highlights:
DbCommand
Log command start - here
Log command error - here
Log command end - here
DbConnection
Log connection open - here
Log connection closed - here
DbTransaction
Log Started - here
Log committed - here
Log rollback - here
Other
Command row count - here. Glimpse calculates this at the DbDataReader level, but you could do it elsewhere as well.
Now that you have an idea of what messages we are expecting and how we generate them, as long as you pass in the right data when you publish those messages, everything should just light up. If you are interested, here is the code that looks for the messages that you will be publishing.
Message Broker: If you look at the GlimpseConfiguration here you will see how to access the broker. This can be done statically if needed (as we do here). From here you can publish the messages you need.
Helpers: For generating some of the above messages, you can use the helpers inside the Support class here. I would have shifted all the code for publishing the actual messages to this class, but I didn't think there would be too many people doing what you are doing.
Update 1
Starting point: With the above approach you shouldn't need to write your own plugin. You should just be able to access the broker via GlimpseConfiguration.GetConfiguredMessageBroker() (make sure you check whether it's null, which it is when Glimpse is turned off, etc.) and publish your messages.
I would imagine that you would put the inspection code that leverages the broker and publishes the messages wherever you have knowledge of the information that needs to be collected (i.e. inside your custom lib). Normally this would require references to Glimpse inside your lib (which you may not want), so to protect against this, your lib would call a proxy (which could be another VS project) that holds the Glimpse dependency. Hence your ADO lib only references your own code.
To get your toes wet, try just publishing a couple of fake connection and command messages. Assuming the broker you get from GlimpseConfiguration.GetConfiguredMessageBroker() isn't null, these should just show up. Then you can work towards getting real data into it from your lib.
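To make that concrete, a minimal sketch of such a "toes wet" publish could look like the following. The message type CommandExecutedMessage and its constructor arguments are hypothetical stand-ins (as are connectionId, commandId and duration, which would come from your own logging code); check the proxy sources linked above for the exact message classes and fields Glimpse expects:

// minimal sketch; the message type and its arguments are hypothetical,
// the broker accessor is the one described above
var broker = GlimpseConfiguration.GetConfiguredMessageBroker();
if (broker != null) // null when Glimpse is turned off
{
    broker.Publish(new CommandExecutedMessage(   // hypothetical type/ctor
        connectionId, commandId, sqlCommand.CommandText, duration));
}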
Update 2
Obsolete Broker Access
It's marked as obsolete because it's going to change in v2. You will still be able to do what you need to do, but the way of accessing the broker has changed. For what you currently need to do, this is OK.
Sometimes null
As you have found, this really depends on where you currently are in the page lifecycle. To get around this, I would probably change my original recommendation a little.
In the code where you are currently creating messages and pushing them to the message bus, try putting them into HttpContext.Current.Items instead. If you haven't used it before, this is a store that ASP.NET provides out of the box and that lasts for the lifetime of a given request. You could keep a list in there: still create the message objects as you do now, but add them to that list instead of pushing them through the broker.
Then create an HttpModule (it's really simple to do) that taps into the PostLogRequest event. Within this handler, pull the list out of the context, iterate through it and push each message into the message broker (accessed the same way you have been), as in the sketch below.
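Here is a minimal sketch of that module, assuming the pending messages are stored as deferred publish actions rather than plain objects, which keeps each message's static type intact for the broker's generic Publish<T>. The Items key, module name and namespaces are assumptions based on Glimpse 1.x:

using System;
using System.Collections.Generic;
using System.Web;
using Glimpse.Core.Extensibility;  // IMessageBroker (assumed namespace)
using Glimpse.Core.Framework;      // GlimpseConfiguration (assumed namespace)

public class GlimpseMessageFlushModule : IHttpModule
{
    // illustrative key under which the DAL parks its pending messages
    private const string PendingKey = "PendingGlimpseMessages";

    public void Init(HttpApplication application)
    {
        application.PostLogRequest += (sender, e) =>
        {
            var pending = HttpContext.Current.Items[PendingKey]
                as List<Action<IMessageBroker>>;
            if (pending == null) return;

            var broker = GlimpseConfiguration.GetConfiguredMessageBroker();
            if (broker == null) return; // Glimpse turned off

            // replay the deferred publishes now that it is safe to do so
            foreach (var publish in pending)
                publish(broker);
        };
    }

    public void Dispose() { }
}

In your DAL you would then add entries like pending.Add(b => b.Publish(message)) instead of calling the broker directly.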
I'm new to Mule. I saw some examples where people had all the information stored in this part of the message called 'payload'. The Mule docs are not the greatest, so I hope you can explain this to me.
I have seen that they receive the information and access it this way:
#[payload.age]
but in my case (just trying things out) I discovered that my information (which comes from the HTTP POST request) is accessible this way:
#[message.inboundProperties.age]
What's the difference? I'm always getting a payload with information that I don't know anything about.
In this image I show you a simple flow of my app, just as an example.
Ty
Actually, the documentation for Mule is now very solid.
Here is a great explanation of the structure of a Mule message: http://www.mulesoft.org/documentation/display/current/Mule+Message+Structure
You can't really use Mule without having some fundamentals: I strongly suggest you go through these first steps http://www.mulesoft.org/documentation/display/current/Mule+Fundamentals
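In short, and paraphrasing that first page: the payload is the main content (body) of the message, while inbound properties are metadata the transport attaches, such as HTTP headers and query parameters. That is why a parameter sent with your HTTP request shows up as #[message.inboundProperties.age], whereas #[payload.age] would only work once the request body has been turned into an object or map, for example with a JSON-to-Object transformer (the transformer choice here is just an illustration).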