How to list users and streams they flow to, within a team/project area? - rtc

I periodically have to generate a list of users and the streams they are flowing to, for example:
user1, stream1, stream3, stream4
user2, stream1, stream2, stream4, stream5
To produce this today I generate a flow diagram for each stream and note the streams that each user is flowing to.
This is time consuming. Can this task be scripted? It does not seem to be covered in the RTC article "Getting started with the Jazz SCM command line in Rational Team Concert": https://jazz.net/library/article/620

The RTC plain Java API could help, as in this thread:
// Locate the repository workspace by exact name and owner
IWorkspaceManager wm = (IWorkspaceManager) teamRepository.getClientLibrary(IWorkspaceManager.class);
IWorkspaceSearchCriteria criteria1 = IWorkspaceSearchCriteria.FACTORY.newInstance();
criteria1.setKind(IWorkspaceSearchCriteria.WORKSPACES);
criteria1.setExactName(INTEGRATION_WORKSPACE);
criteria1.setExactOwnerName(INTEGRATION_WORKSPACE_OWNER);
List<IWorkspaceHandle> workspaceHandles = wm.findWorkspaces(criteria1, Integer.MAX_VALUE, monitor);
IWorkspaceHandle wh = workspaceHandles.get(0);
// Open a connection to the workspace and read its flow table
IWorkspaceConnection workspaceConnection = wm.getWorkspaceConnection(wh, monitor);
IFlowTable flowTable = workspaceConnection.getFlowTable();
IFlowEntry flowEntry = flowTable.getCurrentDeliverFlow();
IFlowNodeHandle streamHandle = flowEntry.getFlowNode();
Once you have an IFlowNodeHandle, see this thread:
// Fetch the flow node (the stream) to read its details
IWorkspace fetchedFlowNode = (IWorkspace) teamRepository.itemManager().fetchCompleteItem(streamHandle, IItemManager.DEFAULT, monitor);
System.out.println(fetchedFlowNode.getDescription());
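Combining the two snippets, a rough per-user sketch could look like the following. This is untested and makes a few assumptions: the plain Java client libraries are set up and teamRepository is already logged in, IFlowTable.deliverTargets() is available in your SDK version to enumerate all outgoing flows (not just the current one), and the user-to-workspace mapping is taken from workspace ownership.
// Sketch: for one user, list the streams that their repository workspaces deliver to.
// Assumes teamRepository is logged in and monitor is an IProgressMonitor.
IWorkspaceManager wm = (IWorkspaceManager) teamRepository.getClientLibrary(IWorkspaceManager.class);
IWorkspaceSearchCriteria byOwner = IWorkspaceSearchCriteria.FACTORY.newInstance();
byOwner.setKind(IWorkspaceSearchCriteria.WORKSPACES);
byOwner.setExactOwnerName("user1");  // repeat for each user of interest
for (IWorkspaceHandle wsHandle : wm.findWorkspaces(byOwner, Integer.MAX_VALUE, monitor)) {
    IWorkspaceConnection wc = wm.getWorkspaceConnection(wsHandle, monitor);
    // deliverTargets() is assumed to return the flow entries this workspace delivers to
    for (Object entryObj : wc.getFlowTable().deliverTargets()) {
        IFlowEntry entry = (IFlowEntry) entryObj;
        IWorkspace stream = (IWorkspace) teamRepository.itemManager()
                .fetchCompleteItem(entry.getFlowNode(), IItemManager.DEFAULT, monitor);
        System.out.println("user1, " + wc.getName() + ", " + stream.getName());
    }
}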

Related

How can I create reliable flask-SQLAlchemy interactions with server-side-events?

I have a Flask app that is working as expected, and I am now trying to add a message notification section to my page. The difficulty I am having is that the database changes I rely on do not seem to show up in a timely fashion.
The HTML code is elementary:
<ul id="out" cols="85" rows="14">
</ul><br><br>
<script type="text/javascript">
var ul = document.getElementById("out");
var eventSource = new EventSource("/stream_game_channel");
eventSource.onmessage = function(e) {
    ul.innerHTML += e.data + '<br>';
};
</script>
Here is the message write code that the second user executes. I know the block runs because the Redis trigger is properly invoked:
msg_join = Messages(game_id=game_id[0],
                    type="gameStart",
                    msg_from=current_user.username,
                    msg_to="Everyone",
                    message=f'{current_user.username} has requested to join.')
db.session.add(msg_join)
db.session.commit()
channel = str(game_id[0]).zfill(5) + 'startGame'
session['channel'] = channel
date_time = datetime.utcnow().strftime("%Y/%m/%d %H:%M:%S")
redisChannel.set(channel, date_time)
Here is the Flask stream code, which is correctly triggered by a new Redis time, but when I pull the list of messages, the new message that the second user has added is not yet accessible:
@games.route('/stream_game_channel')
def stream_game_channel():
    @stream_with_context
    def eventStream():
        channel = session.get('channel')
        game_id = int(channel[:5])  # channel begins with the zero-padded game id
        cnt = 0
        while cnt < 1000:
            print(f'cnt = 0 process running from: {current_user.username}')
            time.sleep(1)
            ntime = redisChannel.get(channel)
            if cnt == 0:
                msgs = db.session.query(Messages).filter(Messages.game_id == game_id)
                msg_list = [i.message for i in msgs]
                cnt += 1
                ltime = ntime
                lmsg_list = msg_list
                for i in msg_list:
                    yield "data: {}\n\n".format(i)
            elif ntime != ltime:
                print(f'cnt > 0 process running from: {current_user.username}')
                time.sleep(3)
                msgs = db.session.query(Messages).filter(Messages.game_id == game_id)
                msg_list = [i.message for i in msgs]
                # new_messages = ...  (need to write this code still)
                ltime = ntime
                cnt += 1
                yield "data: {}\n\n".format(msg_list[len(msg_list) - len(lmsg_list)])
    return Response(eventStream(), mimetype="text/event-stream")
The problem I am running into is that msg_list stays exactly the same length (i.e. the newly pushed message does not get read when I expect it to). Strangely, the second user's session appears to see this information, because its stream correctly reflects the addition.
I am using an Amazon RDS MySQL database.
The solution was to call db.session.commit() before my db.session.query(Messages).filter(...), even when no writes were pending. Committing ends the session's current transaction, so the next query starts a fresh one and can see rows committed from the other user's session; with that in place my code reacted to the change in message-list length properly.

what is correct use of consumer groups in spring cloud stream dataflow and rabbitmq?

A follow up to this:
one SCDF source, 2 processors but only 1 processes each item
The 2 processors (del-1 and del-2) are receiving the same data within milliseconds of each other. I'm trying to rig this so del-2 never receives the same item as del-1 and vice versa. So obviously I've got something configured incorrectly, but I'm not sure where.
My processor has the following application.properties
spring.application.name=${vcap.application.name:sample-processor}
info.app.name=#project.artifactId#
info.app.description=#project.description#
info.app.version=#project.version#
management.endpoints.web.exposure.include=health,info,bindings
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.security.servlet.SecurityAutoConfiguration
spring.cloud.stream.bindings.input.group=input
Is "spring.cloud.stream.bindings.input.group" specified correctly?
Here's the processor code:
@Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
public Object transform(String inputStr) throws InterruptedException {
    ApplicationLog log = new ApplicationLog(this, "timerMessageSource");
    String message = " I AM [" + inputStr + "] AND I HAVE BEEN PROCESSED!!!!!!!";
    log.info("SampleProcessor.transform() incoming inputStr=" + inputStr);
    return message;
}
Is the @Transformer annotation the proper way to link this bit of code with "spring.cloud.stream.bindings.input.group" from application.properties? Are there any other annotations necessary?
Here's my source:
private String format = "EEEEE dd MMMMM yyyy HH:mm:ss.SSSZ";
@Bean
@InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "1000", maxMessagesPerPoll = "1"))
public MessageSource<String> timerMessageSource() {
    ApplicationLog log = new ApplicationLog(this, "timerMessageSource");
    String message = new SimpleDateFormat(format).format(new Date());
    log.info("SampleSource.timeMessageSource() message=[" + message + "]");
    return () -> new GenericMessage<>(new SimpleDateFormat(format).format(new Date()));
}
I'm confused about the "value = Source.OUTPUT". Does this mean my processor needs to be named differently?
Is the inclusion of @Poller causing me a problem somehow?
This is how I define the 2 processor streams (del-1 and del-2) in SCDF shell:
stream create del-1 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
stream create del-2 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics > :merge"
Do I need to do anything differently there?
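For reference, in Spring Cloud Stream the consumer group is what controls this: consumers that share a group compete for messages from a destination, while consumers in different groups each receive every message. Because del-1 and del-2 are two separate stream definitions, they most likely end up in different groups on the :split destination, so each gets a copy. As a sketch only (it assumes the processor's input binding really is named "input", that a group supplied in the definition is honored by the deployer, and "split-workers" is just a hypothetical group name), a shared group could be expressed like this:
stream create del-1 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics --spring.cloud.stream.bindings.input.group=split-workers > :merge"
stream create del-2 --definition ":split > processor-that-does-everything-sleeps5 --spring.cloud.stream.bindings.applicationMetrics.destination=metrics --spring.cloud.stream.bindings.input.group=split-workers > :merge"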
All of this is running in Docker/K8s.
RabbitMQ is given by bitnami/rabbitmq:3.7.2-r1 and is configured with the following props:
RABBITMQ_USERNAME: user
RABBITMQ_PASSWORD: <redacted>
RABBITMQ_ERL_COOKIE: <redacted>
RABBITMQ_NODE_PORT_NUMBER: 5672
RABBITMQ_NODE_TYPE: stats
RABBITMQ_NODE_NAME: rabbit@localhost
RABBITMQ_CLUSTER_NODE_NAME:
RABBITMQ_DEFAULT_VHOST: /
RABBITMQ_MANAGER_PORT_NUMBER: 15672
RABBITMQ_DISK_FREE_LIMIT: "6GiB"
Are any other environment variables necessary?

Apache Flume with 2 different interceptors on same source

I am trying to add 2 different interceptors on the same source and send the intercepted data to 2 different channels.
However, I was not able to configure this, and I couldn't find any documentation about it. I am also having some issues with the channel selectors: I am not sure how to select a channel based on which interceptor matched.
Here is my code so far:
a1.sources = syslog_udp
a1.channels = chan1 chan2
# sink1 and sink2 are different Kafka sinks
a1.sinks = sink1 sink2
a1.sources.syslog_udp.type = syslogudp
a1.sources.syslog_udp.port = 514
a1.sources.syslog_udp.host = 0.0.0.0
a1.sources.syslog_udp.keepFields = true
a1.sources.syslog_udp.interceptors = i1 i2
a1.sources.syslog_udp.interceptors.i1.type = regex_filter
a1.sources.syslog_udp.interceptors.i1.regex = '<regex_string1>'
a1.sources.syslog_udp.interceptors.i1.excludeEvents = false
a1.sources.syslog_udp.interceptors.i2.type = regex_filter
a1.sources.syslog_udp.interceptors.i2.regex = '<regex_string1>'|'<regex_string2>'
a1.sources.syslog_udp.interceptors.i2.excludeEvents = false
a1.sources.syslog_udp.selector.type = multiplexing
a1.sources.syslog_udp.channels = chan1 chan2
a1.channels.chan1.type = memory
a1.channels.chan1.capacity = 200
a1.channels.chan2.type = memory
a1.channels.chan2.capacity = 200
It seems there is no straightforward setup for this.
A workaround for this kind of layout is to keep a single, wider interceptor in one agent, pipe that agent's output to an Avro sink, and then set up a second agent whose Avro source applies the narrower interceptor, as sketched below.
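This is only an illustration of that workaround, not tested configuration: the agent-2 host name and port are placeholders, the two Kafka sink definitions are omitted exactly as in the question, and the regex placeholders are reused from above.
# Agent 1: syslog source, wider filter (i2), replicating to the local Kafka sink and to agent 2
a1.sources = syslog_udp
a1.channels = chan2 chan_fwd
a1.sinks = sink2 avro_fwd
a1.sources.syslog_udp.type = syslogudp
a1.sources.syslog_udp.port = 514
a1.sources.syslog_udp.host = 0.0.0.0
a1.sources.syslog_udp.keepFields = true
a1.sources.syslog_udp.interceptors = i2
a1.sources.syslog_udp.interceptors.i2.type = regex_filter
a1.sources.syslog_udp.interceptors.i2.regex = '<regex_string1>'|'<regex_string2>'
a1.sources.syslog_udp.interceptors.i2.excludeEvents = false
a1.sources.syslog_udp.selector.type = replicating
a1.sources.syslog_udp.channels = chan2 chan_fwd
a1.channels.chan2.type = memory
a1.channels.chan2.capacity = 200
a1.channels.chan_fwd.type = memory
a1.channels.chan_fwd.capacity = 200
a1.sinks.sink2.channel = chan2
a1.sinks.avro_fwd.type = avro
a1.sinks.avro_fwd.hostname = <agent2-host>
a1.sinks.avro_fwd.port = 4545
a1.sinks.avro_fwd.channel = chan_fwd

# Agent 2: Avro source, narrower filter (i1), feeding the other Kafka sink
a2.sources = avro_in
a2.channels = chan1
a2.sinks = sink1
a2.sources.avro_in.type = avro
a2.sources.avro_in.bind = 0.0.0.0
a2.sources.avro_in.port = 4545
a2.sources.avro_in.interceptors = i1
a2.sources.avro_in.interceptors.i1.type = regex_filter
a2.sources.avro_in.interceptors.i1.regex = '<regex_string1>'
a2.sources.avro_in.interceptors.i1.excludeEvents = false
a2.sources.avro_in.channels = chan1
a2.channels.chan1.type = memory
a2.channels.chan1.capacity = 200
a2.sinks.sink1.channel = chan1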

Pentaho Kettle: list of remote Carte objects IDs from Java

I already know how to run, and attach to, a transformation running on a remote Carte server from Java, given the transformation's Carte object ID:
KettleEnvironment.init();
TransMeta transMeta = new TransMeta("file.ktr");
Trans trans = new Trans(transMeta);
SlaveServer ss = new SlaveServer("test", IP, PORT, "cluster", "cluster");
TransExecutionConfiguration jec = new TransExecutionConfiguration();
jec.setRemoteServer(ss);
String carteObjectId = trans.sendToSlaveServer(transMeta, jec, null, null);
and
KettleEnvironment.init();
SlaveServer ss = new SlaveServer("test", IP, PORT, "cluster", "cluster");
SlaveServerTransStatus state = ss.getTransStatus(transMetaName, carteObjectId, 0);
List<StepStatus> list = state.getStepStatusList();
However, for more general (and usable) remote monitoring I need the whole list of object IDs of the transformations that are running or have run on the remote Carte server. Which methods can I use to get such a list?
List<SlaveServerTransStatus> transStatuses = slave1.getStatus().getTransStatusList();
for (SlaveServerTransStatus transStatus : transStatuses) {
    System.out.println(transStatus.getTransName() + " -- " + transStatus.getStatusDescription() + " --- " + transStatus.getId());
}
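Put together with the connection code from the question, a minimal self-contained sketch (reusing the same IP, PORT and "cluster"/"cluster" credentials) that prints the Carte object ID of every transformation the server knows about could look like this:
KettleEnvironment.init();
SlaveServer slave1 = new SlaveServer("test", IP, PORT, "cluster", "cluster");
// getStatus() returns a SlaveServerStatus holding one entry per registered transformation
SlaveServerStatus status = slave1.getStatus();
for (SlaveServerTransStatus transStatus : status.getTransStatusList()) {
    System.out.println(transStatus.getTransName() + " : " + transStatus.getId());
}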

Multiple (4x) SPI devices on Raspberry Pi 2 running Windows 10 IoT

I have successfully communicated with a single SPI device (an MCP3008). Is it possible to run multiple (4x) SPI devices on a Raspberry Pi 2 with Windows 10 IoT?
I'm thinking of manually wiring the CS (chip select) lines and driving the right one active before calling the SPI function, then driving it inactive when the SPI function is done.
Can that work on Windows 10 IoT?
What about configuring the SPI chip-select pin? Can the pin number be changed during SPI initialization? Is that possible?
Is there a smarter way to use multiple (4 x MCP3008) SPI devices on Windows 10 IoT?
(I'm planning to monitor 32 analogue signals that will be input to my Raspberry Pi 2 running Windows 10 IoT.)
Thanks a lot!
Of course you can use as many devices as you want (as many as you have free GPIO pins).
You just have to indicate which device you are addressing.
First, set up the SPI configuration, for example using chip select line 0:
settings = new SpiConnectionSettings(0); //chip select line 0
settings.ClockFrequency = 1000000;
settings.Mode = SpiMode.Mode0;
String spiDeviceSelector = SpiDevice.GetDeviceSelector();
devices = await DeviceInformation.FindAllAsync(spiDeviceSelector);
_spi1 = await SpiDevice.FromIdAsync(devices[0].Id, settings);
You cannot use that chip-select pin for anything else afterwards. So now configure the output pins, using the GpioPin class, that you will use to select the individual devices:
GpioPin_19 = IoController.OpenPin(19);
GpioPin_19.Write(GpioPinValue.High);
GpioPin_19.SetDriveMode(GpioPinDriveMode.Output);
GpioPin_26 = IoController.OpenPin(26);
GpioPin_26.Write(GpioPinValue.High);
GpioPin_26.SetDriveMode(GpioPinDriveMode.Output);
GpioPin_13 = IoController.OpenPin(13);
GpioPin_13.Write(GpioPinValue.High);
GpioPin_13.SetDriveMode(GpioPinDriveMode.Output);
Always select the device before a transfer (example method):
private byte[] TransferSpi(byte[] writeBuffer, byte ChipNo)
{
    var readBuffer = new byte[writeBuffer.Length];
    // Pull the selected device's CS line low before the transfer
    if (ChipNo == 1) GpioPin_19.Write(GpioPinValue.Low);
    if (ChipNo == 2) GpioPin_26.Write(GpioPinValue.Low);
    if (ChipNo == 3) GpioPin_13.Write(GpioPinValue.Low);
    _spi1.TransferFullDuplex(writeBuffer, readBuffer);
    // Release the CS line again once the transfer is done
    if (ChipNo == 1) GpioPin_19.Write(GpioPinValue.High);
    if (ChipNo == 2) GpioPin_26.Write(GpioPinValue.High);
    if (ChipNo == 3) GpioPin_13.Write(GpioPinValue.High);
    return readBuffer;
}
From: https://projects.drogon.net/understanding-spi-on-the-raspberry-pi/
The Raspberry Pi only implements master mode at this time and has 2 chip-select pins, so can control 2 SPI devices. (Although some devices have their own sub-addressing scheme so you can put more of them on the same bus)
I've successfully used 2 SPI devices in the DeviceTester project and Breathalyzer project within Jared Bienz's IoT Devices GitHub repo.
Notice that in each project the SPI interface descriptor is declared explicitly in the ControllerName property for the ADC and the display used in both of these projects. Detailed information about the Breathalyzer project can be found on my blog.
// ADC
// Create the manager
adcManager = new AdcProviderManager();
adcManager.Providers.Add(
    new MCP3208()
    {
        ChipSelectLine = 0,
        ControllerName = "SPI1",
    });

// Get the well-known controller collection back
adcControllers = await adcManager.GetControllersAsync();

// Create the display
var disp = new ST7735()
{
    ChipSelectLine = 0,
    ClockFrequency = 40000000, // Attempt to run at 40 MHz
    ControllerName = "SPI0",
    DataCommandPin = gpioController.OpenPin(12),
    DisplayType = ST7735DisplayType.RRed,
    ResetPin = gpioController.OpenPin(16),
    Orientation = DisplayOrientations.Portrait,
    Width = 128,
    Height = 160,
};