Will the singleton pattern work in a load-balanced environment?

While reading about the singleton pattern, I had the following query that I need help with:
Assume a singleton class contains some state/data.
On a single server this is not an issue, because only a single object will be created.
But in a load-balanced environment, one object would be created for each server,
and in that case a state/data inconsistency issue will appear.
So I want to know whether my understanding is correct, and if it is, what the workaround is.
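For reference, a minimal sketch (in Java, with illustrative names only) of the kind of stateful singleton being described; each JVM behind the load balancer would hold its own copy of this instance and its counter:

public final class VisitCounter {
    // One instance per JVM/classloader; behind a load balancer, every server
    // ends up with its own independent copy of this object and its state.
    private static final VisitCounter INSTANCE = new VisitCounter();

    private int counter; // mutable state held inside the singleton

    private VisitCounter() { }

    public static VisitCounter getInstance() {
        return INSTANCE;
    }

    public synchronized void increment() {
        counter++;
    }

    public synchronized int current() {
        return counter;
    }
}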

Merge two Endeca Servers (Endeca 3.1) into one, including their current data

Let me explain in more detail:
First: I'm running Endeca 3.1, so "Endeca Server" here refers to 3.0's Data Domain.
I'm required to use an Endeca Server currently present on Endeca (from a downloaded demo VM). All the information on it, including groups, attributes and data, must be merged into our Endeca Server. (It could also be the other way around; I could merge my Endeca Server into this one.)
So far, I've tried the following:
1) Clone the Endeca Server.
2) Use the putCollection sconfig operation to create a collection on it with the same name I have on mine.
3) Load configurations using the LoadCollection & LoadAttributes graphs from the OEID POC Template 3.1. I point to the new collection in the Configuration.xls file.
This is where I encounter an issue: the LoadAttributes graph gets a timeout message from the server's web service, and then the Config WSDL becomes inaccessible for a while. I can't get beyond this point.
I've been able to load data into the collection, but I need to load the attributes first.
Thanks in advance for your replies.
Regards
There are a few techniques.
Have you tried exporting the data domain and then importing it?
You can use the endeca-cmd tools to export to a file and then import from that file. This would enable you to bring two data stores into one server.
If you want to combine two data stores, that is a different question.
The simplest approach in 3.1, if the data collections are small, is to extract them as CSV (via a data table), convert them to XLS, and add them via self-provisioning into separate collections within a single data store. If you are running in the VM, this is potentially the easiest approach.
This can also be done using Integrator.
You don't need to load the attributes unless you are using multi-value types. You can call against the conversation web service to extract the data and then load it using 'bulk load'. I would not worry too much about creating the attributes unless this becomes essential due to their type or complexity. If you cannot call against the conversation web service, then again extract as CSV and load using Integrator.

How to access context after each Entity Framework db migration

When I Add-Migration, I get the appropriate DbMigration class with the Up / Down methods, where I am able to make schema changes and (with the use of the Sql() method) can make data/content changes as well.
I'd like to be able to make content changes per migration using the database context. I understand that I could use the Seed method in a Configuration class, but my understanding is that I can only wire up one Configuration with my initializer.
I'd prefer to have UpCompleted()/DownCompleted() methods that would provide access to the db context after the migration completes. This would enable writing incremental data/content change "scripts" in a manner less prone to errors than using the Sql() method.
Am I missing something? Is this possible?
Thanks!
That doesn't really work because the context only has your most recent model - which can only be used to access the database once the most recent migration has run (which is effectively what Seed achieves).
As an example of how this idea breaks: if you moved a property from one class to another, seed logic from older migrations would no longer compile. But you couldn't change it to use the new property either, because the corresponding column wouldn't exist in the database yet.
If you want to write this kind of seed/data-manipulation logic, you need to put it at the end of the Up/Down methods and use the Sql method to perform it using raw SQL.
~Rowan

Deleting rows in datastore by time range

I have a CKAN datastore with a column named "recvTime" of type timestamp (i.e. using "timestamp" as type at datastore_create time, as shown in this link). Example value for this column is "2014-06-12T16:08:39.542000".
I have a large number of records in the datastore (thousands) and I would like to delete the rows before a given date in "recvTime". My first thought was to do it through the REST API with the datastore_delete operation using a range filter, but that is not possible, as described in the following Q&A.
Is there any other way of solving the issue, please?
Given that I have access to the host where the CKAN server is running, I wonder if this could be achieved by executing a regular SQL statement on the PostgreSQL engine where the datastore is persisted. However, I haven't found information about manipulating CKAN's underlying data model in the CKAN documentation, so I don't know whether this is a good idea or whether it is risky...
Any workaround or information pointer is highly welcome. Thanks!
You could definitely do this directly on the underlying database if you were willing to dig in there (the structure is pretty simple with tables named after the corresponding resource id). You could even turn this into an API of your own using an extension (though you'd want to be careful about permissions).
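As a minimal sketch of that direct approach (not something from this thread): the snippet below assumes a JDBC connection to the DataStore database and uses a made-up resource id as the table name; the database name, credentials and cutoff date are placeholders to adapt to your own CKAN setup, and backing up the table first is advisable.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class DeleteOldRows {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; requires the PostgreSQL JDBC driver on the classpath.
        String url = "jdbc:postgresql://localhost:5432/datastore_default";
        String resourceId = "11111111-2222-3333-4444-555555555555"; // hypothetical resource id = table name

        try (Connection conn = DriverManager.getConnection(url, "ckan_user", "password")) {
            // DataStore tables are named after the resource id, so the identifier is quoted.
            String sql = "DELETE FROM \"" + resourceId + "\" WHERE \"recvTime\" < ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setTimestamp(1, Timestamp.valueOf("2014-06-12 00:00:00")); // placeholder cutoff date
                int deleted = ps.executeUpdate();
                System.out.println("Deleted " + deleted + " rows");
            }
        }
    }
}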
You might also be interested in the new support (master only atm) for extending the DataStore API via a plugin in an extension - see https://github.com/ckan/ckan/pull/1725

Should I use Get or Load - nhibernate?

I am wondering which one I should use in this situation. I have a dropdown list that sends a value back to the server. The server currently uses Load to make the object, then grabs a value out of it and tries to convert it to an enum.
After doing some reading, it seems that I should just use Get, since I need to access something from the object other than the primary key.
In general, use Get if you need access to properties other than the Id itself; this makes the intention of your code much clearer and is likely more efficient in the long run. Load is great if you need to set up FK relationships when creating or updating entities without making unnecessary round-trips to the database.
For further reading, check out Ayende's article that describes this in greater detail.
Get and Load are different if lazy loading is enabled.
If you use the Load method, NHibernate does not retrieve the entity from the database; it creates a proxy object whose only populated property is the ID.
If you access any other property, NHibernate will load the entity from the database.
So in your case the better choice is Get.
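To illustrate the difference, here is a small sketch in Java Hibernate terms (NHibernate mirrors this part of the API); the Product entity and the open Session are hypothetical, not taken from the question:

import org.hibernate.Session;

// Minimal hypothetical entity for the sketch; a real mapping would use annotations or XML.
class Product {
    private Long id;
    private String category;
    public Long getId() { return id; }
    public String getCategory() { return category; }
}

public class GetVsLoadExample {
    static void demonstrate(Session session, long productId) {
        // get() hits the database (or cache) right away and returns null if the row does not exist.
        Product fromGet = session.get(Product.class, productId);
        if (fromGet != null) {
            System.out.println(fromGet.getCategory()); // properties are already populated
        }

        // load() returns a lazy proxy with only the identifier set; the SELECT runs on the
        // first access to a non-identifier property, and a missing row fails at that point.
        Product fromLoad = session.load(Product.class, productId);
        System.out.println(fromLoad.getCategory()); // this access triggers the database hit
    }
}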

Core Data Alternatives on iOS

I've been developing several iOS applications using Core Data and it has been an excellent framework to work with. However, I've encountered an issue whereby we more or less have distributed (synced) objects across multiple platforms: a web/database server backend and mobile devices.
Although it hasn't been a problem until now, the static nature of the data model used by Core Data has me a little stuck. Basically what is being requested is a dynamic forms system whereby forms can be created on a server and propagated to the devices. I'm aware of the technique for performing this with a set number of tables with something like:
Forms table
Fields table
Instance of Forms table
Instance Values table
and just linking everything together. What I'm wondering, however, is whether there is an alternative to Core Data (something above talking to an SQLite database directly) that would allow for a more dynamic object graph. Even a standard ORM would be good if there are options for modifying the schema at runtime. The main reason I want to go down this route is performance, in the sense that I don't want the instance values table exploding with entries (on the local device or the server).
My other option is to keep the static schema (object graph) on the iOS devices but have a conversion layer on the server side which fetches the correct object, populates the properties and saves it to the correct table. Then, when a device comes to sync, it does the reverse and breaks the data down into instances. While this saves the server from having a bloated instance values table, it could still be a problem on the device.
Any suggestions are appreciated.
Using specific tables/entities for forms and fields, and entities for instances of each, is probably what I would recommend. Trying to manipulate the ORM schema on the fly doesn't seem like a good idea in general if it is going to happen frequently.
However, if the schema is only going to change infrequently, you can probably do it with Core Data. You can programmatically create and/or manipulate the NSManagedObjectModel prior to creating an NSManagedObjectContext. You can also create migration logic so data stored in an old model can be preserved when you update the model and need to recreate the context and stores.
These other SO posts may be helpful:
Customize core data model at runtime?
Handling Core Data Model Changes
You need to think carefully about what you are actually modeling.
Are you modeling (1) the actual "forms", i.e. UI elements; (2) data that might be presented in any number of UI versions, e.g. firstName; or (3) both?
A data model designed to model forms would have entities like:
Form {
    name: string
    fields <--> Field.form
}
Field {
    width: number
    height: number
    xPos: number
    yPos: number
    label: string
    nextTab <--> Field.priorTab
    priorTab <--> Field.nextTab
    form <<--> Form.fields
}
You would use this to store data about forms as they are displayed in the user interface. Then you would have separate entities, and probably a separate model, to store the actual data that populates the UI elements configured by the first data model.
You can use Core Data to model anything; you just need to know what you are really modeling.