Is it possible to create PointToPointDumbbell in ns3 but have different applications on the left nodes - ns-3

Is it possible to create a PointToPointDumbbell in ns-3 but have different applications on the left nodes, such as OnOffApplication, UdpClientServer, and so on?

Yes, this is possible.
Begin by creating the dumbbell topology with PointToPointDumbbellHelper. Next, use the GetLeft(uint32_t) and GetRight(uint32_t) methods to get the individual Nodes on each side of the dumbbell.
Then install whichever application you want on a given Node, either directly with Node::AddApplication(Ptr<Application>) or with OnOffHelper::Install(Ptr<Node>) (or the helper for whatever application you want to install).
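For concreteness, here is a minimal sketch of that approach. The node counts, data rates, port numbers, and the particular mix of OnOff, PacketSink and UDP echo applications are illustrative choices only, not something the dumbbell helper requires:

// Sketch: a 2x2 dumbbell with an OnOff/PacketSink pair on leaf 0
// and a UDP echo client/server pair on leaf 1.
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/point-to-point-layout-module.h"
#include "ns3/applications-module.h"

using namespace ns3;

int main (int argc, char *argv[])
{
  PointToPointHelper leaf;
  leaf.SetDeviceAttribute ("DataRate", StringValue ("10Mbps"));
  PointToPointHelper bottleneck;
  bottleneck.SetDeviceAttribute ("DataRate", StringValue ("1Mbps"));

  // Two leaf nodes on each side of the bottleneck link.
  PointToPointDumbbellHelper dumbbell (2, leaf, 2, leaf, bottleneck);

  InternetStackHelper stack;
  dumbbell.InstallStack (stack);
  dumbbell.AssignIpv4Addresses (Ipv4AddressHelper ("10.1.1.0", "255.255.255.0"),
                                Ipv4AddressHelper ("10.2.1.0", "255.255.255.0"),
                                Ipv4AddressHelper ("10.3.1.0", "255.255.255.0"));

  uint16_t port = 9;

  // Left leaf 0: OnOff traffic towards right leaf 0, which runs a PacketSink.
  OnOffHelper onOff ("ns3::UdpSocketFactory",
                     Address (InetSocketAddress (dumbbell.GetRightIpv4Address (0), port)));
  ApplicationContainer apps = onOff.Install (dumbbell.GetLeft (0));

  PacketSinkHelper sink ("ns3::UdpSocketFactory",
                         Address (InetSocketAddress (Ipv4Address::GetAny (), port)));
  apps.Add (sink.Install (dumbbell.GetRight (0)));

  // Left leaf 1: UDP echo client talking to an echo server on right leaf 1.
  UdpEchoServerHelper echoServer (7);
  apps.Add (echoServer.Install (dumbbell.GetRight (1)));

  UdpEchoClientHelper echoClient (dumbbell.GetRightIpv4Address (1), 7);
  echoClient.SetAttribute ("MaxPackets", UintegerValue (10));
  apps.Add (echoClient.Install (dumbbell.GetLeft (1)));

  apps.Start (Seconds (1.0));
  apps.Stop (Seconds (10.0));

  Simulator::Stop (Seconds (11.0));
  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}

The dumbbell helper only builds the topology; it does not care which application helper you point at each leaf, so you can mix and match freely.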

Related

How to connect multiple Parse servers to the same MongoDB?

I would like to have two separate Parse servers (each configured with a different app ID) connect to the same MongoDB so they can see the same set of users, allowing me to create two different apps that share the same userbase.
Is this something Parse would support? Are there any expected conflicts or configuration caveats? I was unable to find info about this on Parse's GitHub.
Thanks.
There's nothing special to do besides setting the database URL option to the same value on both servers and making sure the database is accessible from both of them.
I'm not sure why you would need two different applicationIds, since you want the same data and, most likely, the same logic running on both apps.
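For illustration, the relevant part of each server's startup file could look like the sketch below (classic parse-server 2.x-4.x Express mounting; the URIs, keys, app IDs and hostnames are placeholders):

// index.js on server one; server two is identical except for appId/serverURL.
const express = require('express');
const { ParseServer } = require('parse-server');

const api = new ParseServer({
  databaseURI: 'mongodb://mongo.internal:27017/shared-db', // same value on both servers
  appId: 'appOne',                                         // e.g. 'appTwo' on the other server
  masterKey: 'change-me',
  serverURL: 'http://server-one.example.com:1337/parse',
});

const app = express();
app.use('/parse', api);   // mount the Parse API
app.listen(1337);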
No, Parse Server does not support sharing classes between applications.
What you could do is have one of the instances (or maybe a third one) handle authentication and store your user information. I am pretty sure this would mean you have to manually set the user info on your requests and objects before saving them on the other two instances.
Another option is for each of the instances to have an afterSave hook on the user class that saves and updates the info on the other instance. This seems easier to build and maintain.
I would choose the second option.

Adding auxiliary DB data during deployment

My app consists of two containers: the app itself and a database. I'm planning to wrap the app into a chart, thus paving the way for easy, reproducible deployment.
Apart from setting/reading environment variables (which Helm + Kubernetes seems to handle really well), part of the app's configuration is:
making sure the database is pre-filled with special auxiliary data (e.g. the admin user exists, the user role names required to create new users are present, etc.).
I like the idea of having readable YAML files hold the entire configuration in a human-readable format. However, at a glance it doesn't seem that Helm would help in any way with this kind of configuration (DB records).
That being said, what is the best place to put the code/configuration that ensures the DB contains certain auxiliary records? A config YAML file? A container init script, written in bash?
You are right: neither Kubernetes nor Helm can prepare pre-filled database records or schema for you.
You should probably have your application initialize that pre-filled data. If you don't want to put this logic into your application, you can ship an initialization script and configure an init container with Kubernetes.
Kubernetes makes sure that every time your application container is restarted, the init container runs first. In the init container, you can execute a bash/python/... script that makes sure the records you want are there.
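As a sketch of what such a seed script could look like, assuming (purely for illustration) a PostgreSQL database, credentials injected as environment variables from the chart's values, and roles/users tables with unique name/username columns:

#!/usr/bin/env python3
# Idempotent seed script for an init container (illustrative only).
import os

import psycopg2

conn = psycopg2.connect(
    host=os.environ["DB_HOST"],
    dbname=os.environ["DB_NAME"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
)

with conn, conn.cursor() as cur:
    # Role names required to create new users; ON CONFLICT makes the script
    # safe to re-run every time the init container starts.
    for role in ("admin", "editor", "viewer"):
        cur.execute(
            "INSERT INTO roles (name) VALUES (%s) ON CONFLICT (name) DO NOTHING",
            (role,),
        )

    # Make sure the admin user exists.
    cur.execute(
        "INSERT INTO users (username, role) VALUES (%s, %s) "
        "ON CONFLICT (username) DO NOTHING",
        ("admin", "admin"),
    )

conn.close()

The script itself can live in a ConfigMap or in a small image run by the init container, so the seed data still ends up expressed alongside the chart's YAML.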

Reading different properties for different cluster/node

I have developed a hybrid Worklight app and everything is set up. My situation is that I have a load balancer and two clusters. The two clusters are synchronized from a single WAR file. For certain reasons, we have a server-side Java file in the WAR for sharing some global variables with the Worklight adapters.
The problem is that these two clusters work independently (requests are redirected by the load balancer), so the global variables in the Java file inside each WAR are not shared. How can we maintain only one set of global variables in this case?
Alternatively, is there any way for the Java code to read the current cluster's details (for example the cluster ID or IP address) so that I can write logic to point to different properties in worklight.properties?
[PS: My English is not great; I will clarify further if anything is unclear.]
What you actually need here is to avoid using static variables to share this information.
I suggest using Redis or Memcached (or some other free solution) to share information across the cluster.
A simpler (but less efficient) solution is to use an SQL database to store and load those shared properties. You could create a "configuration" adapter (an SQL adapter) that the other adapters call to read and write the configuration properties.
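As a rough sketch of that configuration-adapter idea (the adapter, table, and column names are made up; the WL.Server SQL-statement calls follow the standard Worklight SQL adapter pattern):

// configuration-adapter-impl.js (hypothetical "configuration" SQL adapter)
var getPropertyStatement = WL.Server.createSQLStatement(
    "SELECT value FROM app_config WHERE name = ?");

var setPropertyStatement = WL.Server.createSQLStatement(
    "UPDATE app_config SET value = ? WHERE name = ?");

// Other adapters call these procedures instead of relying on static Java fields,
// so both cluster members always see the same values.
function getConfigProperty(name) {
    return WL.Server.invokeSQLStatement({
        preparedStatement : getPropertyStatement,
        parameters : [name]
    });
}

function setConfigProperty(name, value) {
    return WL.Server.invokeSQLStatement({
        preparedStatement : setPropertyStatement,
        parameters : [value, name]
    });
}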

Merge two Endeca Servers (Endeca 3.1) into one, including their current data

Let me explain in more detail:
First: I'm running Endeca 3.1, so "Endeca Server" here refers to 3.0's Data Domain.
I'm required to use an Endeca Server currently present on Endeca (downloaded as a demo VM). All the info on it, including groups, attributes and data, must be merged into our Endeca Server. (It could also be the other way around; I could merge my Endeca Server into this one.)
So far, I've tried to do the following:
1) Clone the Endeca Server
2) Use the putCollection sconfig operation to create a collection on it with the same name I have on mine.
3) Load configurations using the LoadCollection & LoadAttributes graphs from the OEID POC Template 3.1. I point to the new collection in the Configuration.xls file.
This is where I hit an issue: the LoadAttributes graph gets a timeout message from the server's web service, and then the config WSDL becomes inaccessible for a while. I can't get beyond this point.
I've been able to load data into the collection, but I need to load the attributes first.
Thanks in advance for your replies.
Regards
There are a few techniques.
Have you tried exporting the data domain and then importing it?
You can use the endeca-cmd tools to export to a file, and then import from that file. This would enable you to add 2 datastores into one server.
If you want to combine 2 datastores then that is a different question.
The simplest approach in 3.1, if the data collections are small, is to extract them as CSV (via a data table), convert to XLS, and add them via self-provisioning into separate collections within a single data store. If you are running in the VM, this is potentially the easiest approach.
This can also be done using Integrator.
You don't need to load the attributes unless you are using multi-value types. You can call against the conversation web service to extract data and then load it using 'bulk-load'. I would not worry too much about creating the attributes unless this becomes essential due to their type or complexity. If you cannot call against the conversation web service, then again extract as CSV and load using Integrator.

How to set up a project and break it into sub-projects, and how to use Slick in this setup

This is a brand new project, so I can use the latest version of Play.
I am using IntelliJ 13.
I want to break out the models/db/service layer because I will also have a job service (reading messages off a queue, for example) that will need this service layer as well.
Since Slick is outside of Play, how do I set up the datasource for this project, keeping in mind that I will be connecting to multiple databases?
Do I need to create a custom config file for this?
web-app (play2!)
- service
service (models + dao)
- models
- dao
jobs (service)
I don't see any examples like this, which I find strange, because I think pretty much any real-world project (beyond simple examples) would have to be set up this way.
Can someone show me sample code where things are broken down like this?
This example isn't broken into sub-projects, but it is very split up and would allow you to specify multiple databases.
https://github.com/geigerma/play-cake
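If it helps, the multi-database setup itself can live entirely in the shared service sub-project. Here is a sketch with current Slick 3.x, where the config paths and the Postgres profile are placeholder choices and the url/driver/user settings for each block come from an application.conf or a reference.conf shipped with the service module:

package service.db

import slick.jdbc.PostgresProfile.api._

// One place that owns the connection pools; both the Play app and the
// standalone jobs process depend on this module and reuse it.
object Databases {
  lazy val main: Database = Database.forConfig("service.maindb")
  lazy val reporting: Database = Database.forConfig("service.reportingdb")
}

DAOs in the same sub-project can then take one of these Database values as a parameter, which keeps Play out of the models/dao code and lets the jobs project reuse it unchanged.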