How can I use RabbitMQ user access management in iAPC?

I'm setting up a new RabbitMQ service in iAPC (Swisscom app cloud) and I need to control user access for the different producer/consumer applications.
My access control requirements:
Application A can only write to queue X.
Application B can only read from queue X.
RabbitMQ usually provides user management functionality. However, the whole user management part of the admin section in the RabbitMQ management GUI is not available.
What solutions exist in iAPC to manage read/write permissions for the different applications that are bound to the service?
Is it even possible to set up different users?

I believe there is no way to add additional users in these managed RabbitMQ service deployments provided by Swisscom. This is quite similar across all of the available shared services (e.g. ElasticSearch or MariaDB), which come with a predefined set of users. I assume this is because they are shared services (as opposed to dedicated ones), where letting you administer existing users would raise authentication/security concerns.
For anyone who is interested, here is how to access your RabbitMQ Cloud Foundry service admin interface via the provided environment parameters to see what is possible:
bind your RabbitMQ service to a running app instance (e.g. MY-APP)
look at the environment of that app with cf env MY-APP
tunnel the RabbitMQ management port to your localhost:
cf ssh -N -T -L 15000:rabbitmq.service.consul:15672 MY-APP
open a web browser and go to http://localhost:15000
use the username and password you found in step (2) under rabbitmqent > credentials > management to log in
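As a rough illustration of what a bound application then does with those provided credentials (instead of users you manage yourself), here is a minimal C# sketch using the RabbitMQ.Client library. The rabbitmqent service key, the credentials.uri field and the queue name X are assumptions taken from the cf env output described above; adjust them to what you actually see in your binding.
// Minimal sketch: connect to the bound RabbitMQ service using the credentials that
// Cloud Foundry injects via VCAP_SERVICES, then publish one message to queue X.
// The "rabbitmqent" key and credentials.uri path are assumptions - check your own cf env output.
using System;
using System.Text;
using Newtonsoft.Json.Linq;
using RabbitMQ.Client;

class Publisher
{
    static void Main()
    {
        var vcap = JObject.Parse(Environment.GetEnvironmentVariable("VCAP_SERVICES"));
        var credentials = vcap["rabbitmqent"][0]["credentials"];
        var amqpUri = (string)credentials["uri"]; // e.g. amqp://user:pass@host:5672/vhost

        var factory = new ConnectionFactory { Uri = new Uri(amqpUri) };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare("X", durable: true, exclusive: false, autoDelete: false);
            var body = Encoding.UTF8.GetBytes("hello from application A");
            channel.BasicPublish(exchange: "", routingKey: "X", basicProperties: null, body: body);
        }
    }
}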

Gridgain console load balance

I have a GridGain three-node cluster and I am also running the GridGain Web Console agent and Web Console on all three nodes. It is all hosted on Windows Server.
I would like to load balance my Web Console. The problem is that I don't know how to share the user registration database, which it stores in a work directory. Can I use an external database to store all that information so that my cluster uses the same database?
There is a problem with the Web Console agent as well. How do I share the tokens stored in default.properties?
There is no definitive guide on how to create a cluster of Web Console instances for high availability.
Can someone please guide me on how I can form a cluster for the Web Console, sharing its user store and tokens?
Thanks
If you are looking for multi-cluster support, take a look at the documentation:
https://www.gridgain.com/docs/web-console/latest/multi-cluster-support
If you are looking for agent fault tolerance: just start several agents. The first agent will process all messages; the others will be in hot-standby mode.
If you are looking for connection fault tolerance between the agent and the cluster (if the cluster node that the agent uses as its connection point fails, the Web Console will lose its connection to the cluster), just specify several node addresses as a comma-separated list for the "node-uri" parameter (in default.properties or as a command-line argument).
For example:
node-uri=http://192.168.0.1:8080,http://192.168.0.2:8080,http://192.168.0.3:8080
Hope this helps.

Free device logs for analysis

I am setting up a home SIEM lab using Splunk. I am looking for sources that can provide logs for various devices, including but not limited to the ones below.
Windows Logs
IIS Logs
IDS/IPS Logs
Based on the logs, I am planning to build search queries for various events and then use them to build rules.
It is not clear why you need sample logs when you can generate them yourself. For example, you can set up a VM with Windows Server and install an agent like NXLog (or any log collection agent that can forward logs via TCP, UDP, TLS, or HTTP) to ship its logs to Splunk.
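If you go the generate-your-own route, another option is to push synthetic test events straight into Splunk over its HTTP Event Collector (HEC), assuming you have enabled HEC and created a token. A minimal C# sketch; the host, token and index below are placeholders:
// Minimal sketch: send one synthetic test event to Splunk's HTTP Event Collector.
// Host, port 8088, token and index are placeholders - replace with your own HEC settings.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class HecSender
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Splunk", "YOUR-HEC-TOKEN");

            var payload = "{\"event\":\"test IDS alert: port scan detected\",\"sourcetype\":\"lab:test\",\"index\":\"main\"}";
            var response = await client.PostAsync(
                "https://your-splunk-host:8088/services/collector/event",
                new StringContent(payload, Encoding.UTF8, "application/json"));

            Console.WriteLine(response.StatusCode);
        }
    }
}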
Check out the Montgomery County Data Portal. It's free:
https://data.montgomerycountymd.gov/
You could also connect to a crypto exchange API and get lots of data flowing in in real time.

Verify Load balancing Azure Container Service

I am using the Azure Container Service with Kubernetes orchestrator and have an app deployed on a cluster with 3 nodes. It has 5 replicas. How can I verify load balancing in action e.g. I want to be able to see that every time I hit the external IP I am being routed to perhaps a different node. Thanks.
The simplest solution is to connect (over SSH, for example) to the 3 nodes and run WinDump (or tcpdump) there. If everything is working properly, you will be able to see what happens on every node.
Also here is Microsoft documentation for testing a load balancer:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/tutorial-load-balancer#test-load-balancer
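Another lightweight check, assuming your app can report which pod or node served a request (for example by returning its hostname in the response body), is to hit the external IP repeatedly and tally the answers. A minimal C# sketch; the IP address is a placeholder:
// Minimal sketch: call the service's external IP repeatedly and count which backend
// answered. Assumes the app puts something identifying the pod/node (e.g. its hostname)
// in the response body. 203.0.113.10 is a placeholder - use your service's external IP.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class LoadBalanceCheck
{
    static async Task Main()
    {
        var counts = new Dictionary<string, int>();
        using (var client = new HttpClient())
        {
            for (int i = 0; i < 50; i++)
            {
                var body = await client.GetStringAsync("http://203.0.113.10/");
                counts[body] = counts.TryGetValue(body, out var n) ? n + 1 : 1;
            }
        }
        foreach (var entry in counts)
            Console.WriteLine(entry.Value + "x  " + entry.Key);
    }
}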
The default load balancer available to your Windows Azure Web and Worker roles is a software load balancer and not very configurable; however, it does work in a round-robin fashion. If you want to test this behavior, this is what you need to do:
Create two (or more) instances of your service with RDP access enabled so you can RDP to both instances.
RDP to both instances and run NETMON or any other network monitoring solution on them.
Now access your Windows Azure web application from your desktop. You need to understand that when a network connection is made from your desktop, the connection is kept alive based on network settings (60 seconds by default), so you need to wait until the default timeout has passed before accessing your Windows Azure web application again.
When you access your Windows Azure web application again, you can verify that the second request went to the next instance. Be sure to wait past the connection timeout, otherwise your request will keep being handled by the same instance.
Note: If you don't want to use RDP, you can also create a test ASP.NET page with some code that identifies the specific instance serving the request, which will show you that the page came from a certain instance. The best way to do this is to read the instance ID, as below:
string instanceId = RoleEnvironment.CurrentRoleInstance.Id;
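For example, a throwaway ASP.NET handler along these lines (just a sketch) lets you refresh the page in a browser and watch the serving instance change once the connection timeout has passed:
// Sketch of a simple ASP.NET handler (.ashx) that reports which role instance served the request.
using System.Web;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WhichInstanceHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Served by instance: " + RoleEnvironment.CurrentRoleInstance.Id);
    }

    public bool IsReusable { get { return true; } }
}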
If you want more control over Windows Azure load balancing, I would suggest using Windows Azure Traffic Manager, which will help you route traffic to your site via round-robin, performance, or failover-based scenarios. More info on using Traffic Manager is in this article.

How to: can I test an application using both worker role AND VM role in Azure emulator?

I've looked but can't see an answer to this one:
I have an application that passes Azure messages between a VM role and a worker role. Before I load this into Azure I'd like to test that both work correctly by using the Azure emulator.
Does anyone know if the Azure emulator will accept messages that originate from the VM role and will it allow me to send messages to the VM? Is there a workaround or solution to this?
Both the emulator and the VM will be running on the same host server in my case.
The queues are accessed as HTTP endpoints, so you need to ensure that both components you want to test can access the queue.
If you want to test your application using the storage emulator (an HTTP endpoint provisioned on your local machine, normally http://127.0.0.1:10001/ for queues) then you will need to ensure that the VM role can get to that address.
I would recommend testing with the real storage service. There are differences between the emulator and the actual service, so it's better to test against the real deal (you can always create a test queue).
In this case the endpoint will be on the internet (i.e. http://myaccount.queue.core.windows.net/).
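As an illustration, here is a minimal C# sketch using the classic WindowsAzure.Storage SDK that both roles could run against either a real test queue or the emulator; the connection string and queue name are placeholders:
// Minimal sketch: send and receive one message on a test queue with the classic
// WindowsAzure.Storage SDK. The connection string is a placeholder - point it at a real
// test storage account, or use CloudStorageAccount.DevelopmentStorageAccount to run
// against the local storage emulator instead.
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class QueueSmokeTest
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=YOUR-KEY");
        var client = account.CreateCloudQueueClient();
        var queue = client.GetQueueReference("test-queue");
        queue.CreateIfNotExists();

        queue.AddMessage(new CloudQueueMessage("hello from the VM role"));

        var message = queue.GetMessage();
        Console.WriteLine(message.AsString);
        queue.DeleteMessage(message);
    }
}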

Windows Service Container

For my projects I quite often need to create Windows services.
I need them for scheduling operations, file system watching, asynchronous or long running side tasks (backup files, sending messages, check incoming mail to process, notifications etc).
I also use them to expose WCF services that are cross applications in the enterprise.
The self-hosted scenario seems more appropriate to me as we are still on IIS 6, which is quite limited (HTTP only) for exposing WCF.
(Most of) the services also need to expose some kind of administration interface (web or desktop) for reporting, starting and stopping the various services, etc.
It seems strange to me that a "host container" that provides most of these features (hosting, installing new services, remote UI for admin, exposing WCF, scheduling, etc.) with some kind of MEF plugins doesn't already exist.
What are the options if I do not want to start from scratch?
I am a developer of an open source Windows service hosting framework called Daemoniq. I understand how installers can be an inconvenience, so creating installers on the fly is one of its features. You can download it from http://daemoniq.org
Current features include:
container agnostic service location via the CommonServiceLocator
set common service properties like serviceName, displayName, description and serviceStartMode via app.config
run multiple windows services on the same process
set recovery options via app.config
set services depended on via app.config
set service process credentials via command-line
install, uninstall, debug services via command-line
Please feel free to have a look at it. Code contributions are also welcome =D
Thanks!
There is one host server under development at Microsoft, codenamed Dublin (it later shipped as part of Windows Server AppFabric).
A possible option would be to create one Windows service as a host application, which loads all of your WCF services and creates a ServiceHost for each of them (for instance, through reflection).
Having only one Windows service would make it easy to administer all the service hosts (you wouldn't have to administer multiple Windows services, only in-process hosts).
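A minimal C# sketch of that idea (the assembly name and the convention of hosting every type that implements a [ServiceContract] interface are assumptions, and endpoint configuration is assumed to come from app.config):
// Sketch: a single host process that discovers WCF service implementations via reflection
// and opens a ServiceHost for each. "MyCompany.Services.dll" is a hypothetical plugin
// assembly; endpoints and behaviors are assumed to be configured in app.config.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.ServiceModel;

class WcfHostContainer
{
    static void Main()
    {
        var hosts = new List<ServiceHost>();
        var assembly = Assembly.LoadFrom("MyCompany.Services.dll");

        var serviceTypes = assembly.GetTypes().Where(t =>
            t.IsClass && !t.IsAbstract &&
            t.GetInterfaces().Any(i => i.GetCustomAttributes(typeof(ServiceContractAttribute), true).Any()));

        foreach (var type in serviceTypes)
        {
            var host = new ServiceHost(type); // endpoints come from app.config
            host.Open();
            hosts.Add(host);
            Console.WriteLine("Hosting " + type.FullName);
        }

        Console.ReadLine(); // in a real Windows service this would run until OnStop

        foreach (var host in hosts)
            host.Close();
    }
}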