How to handle REST nouns when data can come from two sources?

We have product images which are stored on the filesystem under a manufacturer folder and those files are assigned to products in an IMAGE table.
Our developer needs two sets of data from the REST API:
Get all images for a product (comes from IMAGE table)
/api/manufacturer/[:id]/product/[:id]/product-image
Get all product images under a manufacturer (comes from file system)
/api/manufacturer/[:id]/product-image
We can't pull from the table in the second case because the client wants to keep unused images around for when new products come in (products often reuse the same image as similar products).
Our developer pointed out that this setup is confusing for API users because the data comes from two different sources (IMAGE table vs. file system).
How is this properly done? Two different nouns?

The users of the API should not care about the back-end architecture. It does not matter that the endpoints look similar (same noun) while the data comes from different sources. Let the middle layer (or data layer) worry about that and deal with it elegantly. The reason we modularize and layer our architecture is so that things like this can be done without one layer (or module) worrying about it.
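To make that concrete, here is a minimal sketch (in Swift, purely to illustrate the layering; all type and path names are hypothetical) of a single product-image resource served by two interchangeable sources. The handler picks the source based on whether a product id is present, and the API consumer never sees the difference:

import Foundation

struct ProductImage {
    let fileName: String
}

// One noun, two possible back ends.
protocol ProductImageSource {
    func images(manufacturerId: Int, productId: Int?) -> [ProductImage]
}

// Backed by the IMAGE table: used when a product id is supplied.
struct ImageTableSource: ProductImageSource {
    func images(manufacturerId: Int, productId: Int?) -> [ProductImage] {
        // Placeholder for: SELECT file_name FROM IMAGE WHERE product_id = :id
        return []
    }
}

// Backed by the manufacturer folder on disk: used for "all images under a manufacturer".
struct ManufacturerFolderSource: ProductImageSource {
    let rootPath: String
    func images(manufacturerId: Int, productId: Int?) -> [ProductImage] {
        let folder = "\(rootPath)/\(manufacturerId)"
        let names = (try? FileManager.default.contentsOfDirectory(atPath: folder)) ?? []
        return names.map(ProductImage.init)
    }
}

// The handler behind /product-image chooses the source; callers just see images.
func productImages(manufacturerId: Int, productId: Int? = nil) -> [ProductImage] {
    let source: ProductImageSource = (productId != nil)
        ? ImageTableSource()
        : ManufacturerFolderSource(rootPath: "/images")
    return source.images(manufacturerId: manufacturerId, productId: productId)
}

Both endpoints keep the same noun; only the selection of the source differs, and that decision stays inside the data layer.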

Related

How to integrate a hierarchical images folder with a CSV file?

I may not have asked the question in the right way, but here is what I want to do.
I have folders arranged hierarchically, folders within folders. For example: I am working on a deep learning project and have images of fan blades. Every fan is in a separate room of a house; since the house has many rooms, we have assigned a number to each fan. A fan has a room number, and each fan blade has three sides, to which we have also assigned characters. The folder structure looks like this:
House - Room - Blade side
Now I want to build a front end with these tabs (house name, room number, blade side, etc.) on a web page. I want to use the local system's drive or Azure storage to hold the huge image dataset, but use a SQL database to store the information (blade side, room number, fan number, etc.) loaded from CSV, and use that SQL database as the backend for the web page.
When a user enters any one of these pieces of information in a tab on the web page, I want it to return the matching image as output.
How to do this?
Thank you very much in advance.

Importing Product in Adobe CQ5

I have a question on how we can import/synchronise products from our back-office to the CQ5 front end.
The to-be architecture is pretty simple: a custom back-office managing all the products (it will basically be the source of truth), and a CQ5-driven website showing search results (driven by Adobe SearchAndPromote) and product details. Purchase transactions will be handled outside of CQ5.
I went through http://dev.day.com/docs/en/cq/current/ecommerce/eCommerce-framework.html and I think I have some idea of the direction we should move in, but I would like someone to confirm that my understanding is correct.
1) I need to create a scheduled job running on the author node that calls the back-office and imports products as a JSON feed. I use the annotation-based @Service(Runnable.class). Is there a way to set it up so it runs on the author node only?
2) Create a custom service (the one mentioned above) that will actually create all the nodes in CRX. If I have desktop and mobile versions of the site, do I need to create all those nodes twice? Are there any tips on an easier way to create them?
3) Let CQ5 replicate those products to publish nodes.
Is there an easier way? I mean, if I were using a more standard web app I would have one controller to show product details, two templates (one for mobile, one for desktop), and a service that would call the back-office and return details for the requested product. But the Sling world is very different, and I want to check that I understand it correctly.
Cheers.
Here are some answers:
1) Here is a good article about different configs for different run modes: http://helpx.adobe.com/cq/kb/RunModeSetUp.html. You can create configs for the publish and author run modes with a flag your code looks for to decide whether to execute the import.
2) It depends. CQ tends to keep copies of content for the mobile site, so it may make sense to copy nodes for the mobile site, but only if those nodes are pages (cq:Page and cq:PageContent) that you create based on the imported data. Otherwise you just need to save the imported data somewhere and fetch it when needed (via JCR queries or methods like .getNode()); in that case it of course makes sense not to copy your data.
3) It depends here as well. I would consider the following forces: should the imported data be editable? How frequent are updates? How large are they? How critical is consistency across publish instances? If updates are small and infrequent and consistency matters, importing to the author instance followed by replication can work; the same applies if you need to be able to edit the imported data. If updates are large and/or frequent and consistency across publish instances does not matter much (you can afford that some people may see different results from different publish instances during an import), I'd suggest running the import on all publish instances at the same time, since massive replication of imported data may affect regular page/image replication.
Thanks,
Max.

Core Data Alternatives on iOS

I've been developing several iOS applications using Core Data and it has been an excellent framework to work with. However, I've encountered an issue whereby we more or less have distributed (synced) objects across multiple platforms: a web/database server backend and mobile devices.
Although it hasn't been a problem until now, the static nature of the data model used by Core Data has me a little stuck. Basically, what is being requested is a dynamic forms system whereby forms can be created on a server and propagated to the devices. I'm aware of the technique for doing this with a fixed set of tables, something like:
Forms table
Fields table
Instance of Forms table
Instance Values table
and just linking everything together. What I'm wondering, however, is whether there is an alternative to Core Data (something above talking to a SQLite database directly) that allows for a more dynamic object graph. Even a standard ORM would be good if there are options for modifying the schema at runtime. The main reason I want to go down this route is performance, in the sense that I don't want the instance-values table exploding with entries (on the local device or the server).
My other option is to have the static schema (object graph) on the iOS devices but have a conversion layer on the server's side which fetches the correct object, populates the properties and saves it to the correct table. Then when a device comes to sync, it does the reverse and breaks the data down into instances. While this saves the server from having a bloated instance-values table, it could still be a problem on the device.
Any suggestions are appreciated.
Using specific tables/entities for forms and fields, and entities for instances of each, is probably what I would recommend. Trying to manipulate the ORM schema on the fly doesn't seem like a good idea in general if it's going to happen frequently.
However, if the schema is only going to change infrequently, you can probably do it with Core Data. You can programmatically create and/or manipulate the NSManagedObjectModel prior to creating an NSManagedObjectContext. You can also create migration logic so data stored in an old model is preserved when you update the model and need to recreate the context and stores.
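As a rough illustration, here is a minimal Swift sketch (entity and attribute names are made up) of building a model in code before the coordinator and context exist; you would still add a persistent store and handle migration as described above:

import CoreData

// Build a model whose single "Form" entity has one string attribute per field name.
func makeDynamicModel(fieldNames: [String]) -> NSManagedObjectModel {
    let formEntity = NSEntityDescription()
    formEntity.name = "Form"
    formEntity.managedObjectClassName = "NSManagedObject"

    formEntity.properties = fieldNames.map { fieldName -> NSAttributeDescription in
        let attribute = NSAttributeDescription()
        attribute.name = fieldName
        attribute.attributeType = .stringAttributeType
        attribute.isOptional = true
        return attribute
    }

    let model = NSManagedObjectModel()
    model.entities = [formEntity]
    return model
}

// The model must be fixed before the stack is built; changing it later means
// recreating the coordinator and migrating any existing store.
let model = makeDynamicModel(fieldNames: ["firstName", "lastName"])
let coordinator = NSPersistentStoreCoordinator(managedObjectModel: model)
let context = NSManagedObjectContext(concurrencyType: .mainQueueConcurrencyType)
context.persistentStoreCoordinator = coordinator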
These other SO posts may be helpful:
Customize core data model at runtime?
Handling Core Data Model Changes
You need to think carefully about what you are actually modeling.
Are you modeling (1) the actual "forms", i.e. the UI elements, (2) data that might be presented in any number of UI versions, e.g. firstName, or (3) both?
A data model designed to model forms would have entities like:
Form{
name:string
fields<-->Field.form
}
Field{
width:number
height:number
xPos:number
yPos:number
label:string
nextTab<-->Field.priorTab
priorTab<-->Field.nextTab
form<<-->Form.fields
}
You would use this to store data about the forms as displayed in the user interface. Then you would have separate entities, and probably a separate model, to store the actual data that populates the UI elements configured by the first data model.
You can use Core Data to model anything; you just need to know what you are really modeling.

Help with structuring a Telerik OpenAccess Domain Model

My company is about to start a new project using Telerik's OpenAccess ORM. This is a new product to us, and it's the first time we'll be using an ORM for a project instead of a Dataset-based approach. We are currently having some disagreement about the best way to structure our data layer. Specifically, should we have a single .rlinq file and domain model for the project, or per-screen/module .rlinq files that contain only the tables, and the columns from those tables, required for that particular screen/module? To illustrate the latter:
Say we have a Person table, with fields for first name, last name, ssn, birthdate, gender and marital status. In the personal information screen, we need all of these fields, so we include the whole table in the domain model in that .rlinq file. On another screen (with a separate .rlinq file), we only need the person's last name and ssn, so the Person object in that .rlinq file contains only last name and ssn.
The argument for this method has been primarily that we should select only the data we need for a particular screen, and no more. In our current Dataset-based applications, this makes sense. There has also been concern that having unnecessary tables and relationships will cause unneeded data to be loaded even when it is not asked for, adding network load. The argument against this has been that we're fragmenting the domain model and introducing unnecessary complexity, and that part of the ORM's job is to manage data fetching with caching and lazy loading. We can't come to an agreement on this, and can't find any conclusive information one way or the other, so we're turning to the StackOverflow community for help!
If it matters, we're building a Windows Forms based intranet app, and the data layer will be sitting behind WCF services, and the database will have around 100 tables.
Thank you in advance for your help!
In general it is best to have a solid domain model built up in a single RLINQ file. You can then handle screen concerns by projecting queries into ScreenModels/DTOs as needed.
For example:
Say you have a Person object with multiple properties, but on a particular screen you only want to return the first and last name.
var myUserDto = context.People
    .Select(p => new UserDto { FirstName = p.FirstName,
                               LastName = p.LastName })
    .First();
OpenAccess is smart enough to query only for the first/last name in this case. When the screen ends up requiring another property available on the Person object, you only need to update the DTO and the LINQ query.
Also, if you plan to use the Data Service Wizard OpenAccess provides, note that it creates a service per OpenAccessContext. So if you have an .rlinq per entity you will have a service per entity, which would be painful to maintain on the client, to say the least. If you hand-roll the service layer you obviously have a little more control here, but you will still need to constantly remember which OpenAccessContext handles each domain object.
FYI, for a large model it may be helpful to look into the aggregate metadata source OpenAccess provides, which helps break a large model into manageable pieces.
Hope this helps! :)

Storing default instances of an NSManagedObject in every new file

I have a core data document based application. Part of my model works by having a DeviceType table, and a Devices table with a relation between them. I would like my application to be able to store the list of DeviceTypes separately from each file, and possibly be able to sync that to a server later.
What would be the best way to accomplish this?
Thanks,
Gabe
You're using a lot of database terminology with Core Data. You should break that habit as soon as possible (the reasons why are given in the introductory paragraphs to the Core Data Programming Guide).
I assume your "usually-static" device list is something you want to be able to update as new devices come out? I would actually recommend just storing the list as a PLIST resource in your app bundle and pushing an update to the app when new devices come out (for simplicity). Using a dictionary-based PLIST, your keys can be device IDs and that key can be a simple string attribute of your managed objects. It's perfectly reasonable to look things up outside your Core Data model based on some ID.
If you must update, I'd still include the "default" list with the app (see above) but if a ".devicelist" (or whatever) file is present in the documents folder, use that instead. That way you can periodically check for an updated list and download it to the docs folder if it differs.
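A minimal Swift sketch of that idea (the file name, keys, and device ID are hypothetical): load the bundled default list unless an updated copy has been downloaded into the Documents folder, then look entries up by the ID string you store on the managed object.

import Foundation

// Returns a dictionary keyed by device ID, e.g. ["ACME-1000": ["name": "Acme Thermostat"]].
func loadDeviceTypes() -> [String: [String: Any]] {
    let fileManager = FileManager.default
    let documentsURL = fileManager.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let updatedURL = documentsURL.appendingPathComponent("DeviceTypes.plist")

    // Prefer a downloaded update; otherwise fall back to the copy shipped in the bundle.
    let plistURL = fileManager.fileExists(atPath: updatedURL.path)
        ? updatedURL
        : Bundle.main.url(forResource: "DeviceTypes", withExtension: "plist")

    guard let url = plistURL,
          let data = try? Data(contentsOf: url),
          let dict = (try? PropertyListSerialization.propertyList(from: data, options: [], format: nil))
                        as? [String: [String: Any]]
    else { return [:] }
    return dict
}

// The managed object keeps only the device ID as a string attribute;
// everything else about the device type comes from the plist.
let deviceTypes = loadDeviceTypes()
let details = deviceTypes["ACME-1000"]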
If I've misunderstood you, I encourage you to clarify either by editing your question or posting comments.