Dem_NvmBlockID Configuration - nvm

I am facing an issue with Dem module event-related data: after a reset, the data is not written to or read from NvM. I have tried the NvM_ReadAll and NvM_WriteAll calls, but I am still not able to read or write event-related data such as the DTC status mask and extended data values.
Kindly let me know the correct process for storing Dem event-related data in NvM. If there is any configuration point that needs to be checked, please point it out; otherwise, please tell me the sequence of APIs so that the Dem event-related data is automatically updated from the RAM variables to NvM and from NvM back to the RAM variables.

Have you configured a primary memory block inside the Dem configuration to store the DTC and its related information?
If not, try to configure the block chain all the way down to Fee/Fls.

First, check the Dem general options parameter DemEventMemoryEntryStorageTrigger and verify that ExtendedData, Snapshot, and StatusByte are written into the RAM buffer block. If this works correctly during runtime but the data is gone from that RAM block after a reset, the problem is with NvM_ReadAll and NvM_WriteAll.
In that case, check the NvM configuration of the blocks used by Dem and make sure the options NvMSelectBlockForReadAll and NvMSelectBlockForWriteAll are enabled. Typically NvM_ReadAll() is called once during ECU startup (before Dem_Init()) and NvM_WriteAll() once during shutdown (after Dem_Shutdown()), so once the blocks are configured correctly the Dem RAM data is restored and persisted automatically.

Related

Where to find a thorough list of node_redis commands?

I'm using redis to store the userId as a key and the socketId as the value. The important point is that the userId doesn't change, but the socketId changes constantly. So I want to edit the socketId value inside redis, but I'm not sure which node_redis command to use. I'm currently just editing by using .set(userId, mostRecentSocketId).
In addition, I haven't found a good node_redis API reference anywhere with a complete list of commands. I briefly looked at the redis-commands package, but it still doesn't seem to have a full list of commands.
Any help is appreciated; thanks in advance :)
The full list of Redis commands can be found at https://redis.io/commands. After finding the proper command, it shouldn't be hard to find how it is exposed in the binding ("API") you use.
Update: to make it clear, you have the Redis server, whose commands are listed in the doc I linked. Then you have redis-commands - a library for working with Redis (I called it a "binding"). My point was that redis-commands may not include every command the Redis server can handle, and the names of some commands can differ a bit. Other bindings can offer slightly different sets of commands. So it's better to examine the list of commands the Redis server handles, and then select a binding that allows calling those commands (I guess all the bindings have a set method).
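For the use case in the question, plain SET is in fact the right command: calling SET on a key that already exists simply overwrites the old value, which is exactly what you want for a constantly changing socketId. A minimal sketch, assuming the classic callback-style node_redis client (the variable values are illustrative):
var redis = require('redis');
var client = redis.createClient(); // connects to 127.0.0.1:6379 by default

var userId = '42';                 // illustrative values
var mostRecentSocketId = 'abc123';

// SET overwrites any existing value for the key, so this alone
// keeps the stored socketId current.
client.set(userId, mostRecentSocketId, function (err, reply) {
    if (err) { console.error(err); return; }
    console.log(reply); // 'OK'
});

// Read it back later; the value is null if the key does not exist.
client.get(userId, function (err, socketId) {
    console.log(socketId);
});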

How to display a status depending on the data flow position

Consider for example this modified Simple TCP sample program:
How can I display the current state of the program like
Wait for Connection
Connected
Connection terminated
on the front panel, depending on where the "data flow" currently is.
The easiest way to do this is to place a string indicator on your front panel and write messages to a local variable of this indicator at each point where you want to see a status update.
You need to keep in mind how LabVIEW dataflow works: code will execute as soon as the data it depends on becomes available. Sometimes you can use existing structures to enforce this - for example, if you put a string constant inside your loop and wire it to a local variable terminal outside the loop, the write will only happen after the loop exits. Sometimes you may need to enforce that dataflow artificially, for example by placing your operation inside a sequence frame and connecting a wire to the border of the sequence: then what's inside the sequence will only happen after data arrives on that wire. (This is about the only thing you should use a sequence for!)
This method is not guaranteed to be deterministic, but it's usually good enough for giving a simple status indication to the user.
A better version of the above would be to send the status messages on a queue or notifier which you read, and update the status indicator, in a separate loop. The queue and notifier write functions have error terminals which can help you to enforce sequence. A notifier is like the local variable in that you will only see the most recent update; a queue keeps all the data you write to it in the right order so would be more suitable if you want to log all the updates to a scrolling list or log file. With this solution you could add more features: for example the read loop could add a timestamp in front of each message so you could see how recent it was.
A really good solution to this general problem is to use a design pattern based on a state machine. Now your program flow is clearly organised into different states and it's very easy to add in functionality like sending a different message from each state. There are good examples and project templates for these design patterns included with recent versions of LabVIEW.
You should be able to find more information on any of the terms in bold in the LabVIEW help or on the NI website.

Apache Flume - send only new file contents

I am a very new user to Flume, so please treat me as an absolute noob. I am having a minor issue configuring Flume for a particular use case and was hoping you could assist. Note that I am not using HDFS, which is why this question is different from others you may have seen on forums.
I have two Virtual Machines (VMs) connected to each other through an internal network on Oracle VirtualBox. My goal is to have one VM watch a particular directory that will only ever have one file in it. When the file is changed, I wish for Flume to send only the new lines/data. I want the other VM to receive this data and update/concatenate it to a single file in a particular directory on it.
So far, I have this process very close to working. Whenever changes are made on VM1, they are updated on VM2. However, the entire file on VM1 is sent to VM2 every time, not just the new lines. For example, if I wrote “Test1” and then a while later wrote “Test2” underneath it in the file on VM1, the output on VM2 would be:
Test1
Test1
Test2
What I want to see is:
Test1
Test2
I am not sure how to implement this, and am sending this email after thoroughly examining the Flume user guide documentation and the most relevant articles on Stack Overflow/Stack Exchange. For your reference, the current configurations are below (they are working in the manner I mentioned above).
VM1 configuration
VM2 configuration
I realize another solution would be to keep the configuration on VM1 and overwrite the file on VM2 every time new contents are detected. However, I am also unsure how to implement this.
Any assistance you could provide is greatly appreciated!
Use the TAILDIR source provided in Flume. It periodically writes the last read position to a position file, and it's more reliable than the exec source: even if the agent crashes or stops for some reason, it will resume reading from the last position saved in the position file.
agent1.sources.src1.type = TAILDIR
agent1.sources.src1.channels = ch1
agent1.sources.src1.filegroups = f1
agent1.sources.src1.filegroups.f1 = /path/to/log/file
agent1.sources.src1.maxBackoffSleep = 10000
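# Optional (illustrative path): where TAILDIR records the last read position.
# If omitted, it defaults to ~/.flume/taildir_position.json.
agent1.sources.src1.positionFile = /var/log/flume/taildir_position.json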
Set the maxBackoffSleep value as per your need; it is the maximum time (in milliseconds) the agent waits before polling the log file again after an attempt that found no changes.

Tools used to update dynamic properties without even restarting the application/server

In my project I am trying to set things up so that dynamic properties can be updated on the server/application without restarting it.
The problem we face is that whenever we have to update or change some properties which are dynamic in nature, we have to restart the server/application every time, and this results in the server being unavailable for that period.
I have already found one tool for this, Archaius-ZooKeeper: https://github.com/Netflix/archaius/
We are trying to do it for JBoss servers, where we deploy a war file to the server.
Please suggest any other methods, tools, or technologies that can be used to achieve this.
Thanks in advance.
You could consider JRebel, which allows you to redeploy your app without any downtime; you can then use JRebel Remoting to redeploy from Eclipse to a remote server.
You may use ZooKeeper. You have to create a znode and add the properties to it. All your servers/applications should read from this znode and also put a watch on it for data changes, as in the sketch below.
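A minimal Node.js sketch of that pattern, assuming the node-zookeeper-client package (the znode path and the applyProperties helper are illustrative, not part of any real project):
var zookeeper = require('node-zookeeper-client');
var client = zookeeper.createClient('localhost:2181');

function readProperties() {
    client.getData(
        '/config/myapp',            // znode holding the properties
        function (event) {          // ZooKeeper watches fire only once,
            readProperties();       // so re-read and re-register on each change
        },
        function (err, data, stat) {
            if (err) { console.error(err); return; }
            applyProperties(data.toString('utf8')); // hypothetical helper
        }
    );
}

client.once('connected', readProperties);
client.connect();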
Alternatively, you may use a database to store the properties along with their modification times. Whenever you change the value of a property, the corresponding modification time is updated. All your applications/servers keep pulling the delta at some interval (maybe every 2 or 5 seconds), along the lines of the sketch below.
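A sketch of that polling approach in Node.js; the db client, table, and column names are hypothetical stand-ins for whatever database access you already have:
var lastCheck = 0; // timestamp of the last successful poll

setInterval(function () {
    // Fetch only properties modified since the last poll (the delta).
    db.query(                       // 'db' is a hypothetical DB client
        'SELECT name, value, modified_at FROM app_properties WHERE modified_at > ?',
        [lastCheck],
        function (err, rows) {
            if (err) { console.error(err); return; }
            rows.forEach(function (row) {
                applyProperty(row.name, row.value); // hypothetical helper
                if (row.modified_at > lastCheck) { lastCheck = row.modified_at; }
            });
        }
    );
}, 5000); // poll every 5 seconds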
Or you may have the properties hosted on a web server, on NFS, or in some distributed cache, etc. All your applications/servers keep re-reading them at some interval to detect changes.
You can use Spring Cloud Zookeeper. I shared a little example here.

Intersystems Cache routine to write process information to a file on local system?

I am interested in creating a routine that would query the currently running cache processes and then write this information to a file. How could this be done in Cache 2008.2?
PERFMON might be what you're looking for. It's an app with its own UI, but you can call its functions directly too, as an API.
Check the Cache docs for "Cache Monitoring Guide". That will give you links to PERFMON docs, as well as docs for other system monitoring tools.
You might find something useful in the Class Reference, under packages %SYSTEM, %SYS, and %Monitor.
For some process info you might need to shell out to the OS. In that case, look into the $ZF function, which lets you invoke OS-level commands from within Cache.
Oh, and you might want to consider saving the process data within the Cache DB rather than dumping it out to a file. That is, create a Persistent Class with properties corresponding to each process attribute you want to capture, then write code to create, populate, and save instances of that class, taking the data from PERFMON or whatever other source you choose.
If you do that, you can use Cache SQL to generate whatever kind of report you need. (Cache will automatically generate an SQL table corresponding to your Persistent Class.) Cache supports ODBC, so you can use an external tool like Crystal Reports or Access for that part.
Obviously that will be more work than just echoing data to a file, but some kind of structure will be needed if you're going to do anything interesting with the information.