What to call the intermediate layer of the program? [closed] - naming-conventions

We have a program consisting of three parts. There's the back end, which is the NT service that handles the requests. There's also a COM object that implements a predefined interface, is consumed by client software, and passes the requests to the service.
Since we need both 32-bit and 64-bit versions of the COM object, we split it into two parts:
the front end, which implements the predefined interface
the middle layer, which implements a newly introduced intermediate interface and is hosted in COM+ to avoid reimplementing everything as both 32-bit and 64-bit
So the front end forwards requests to the intermediate layer, and the intermediate layer forwards them to the back end.
The problem is that the front end is the first thing the customers "see", and we'd rather not call it "Our Product Front End" but simply "Our Product". We also need to invent a good name for the intermediate layer. What's typically used for the latter?
So far the most suitable match I've found in the dictionary is "spacer level" - concise, and it somehow reflects what the layer is for. Will that do?

The term applied to the middle tier can be influenced by its role. Typically, it is a controller in an MVC paradigm, a business logic layer, a communication/transport layer or a combination of these.
Terms I've used:
Middle Tier (admittedly a generic cop-out -- could apply to your situation)
Business Logic Layer or Business Objects - not the best fit for your app...
Transport Layer -- seems more apt to your situation, though I'm not convinced "everybody will immediately know what you're talking about" when you introduce this term.
Controller (probably too abstract -- Model-View-Controller)

Could always try going with "Data-Access Layer"

Maybe Translation layer?

We've recently coined the term "Intermediary Layer"

Related

OOP and GUI: what to implement where? [closed]

About six months ago I put on a full-stack developer hat at my job and started working on a tool comprised of a GUI and a database. The client has requested that the tool be written in Python, so I've been working with the company PyQt license to create the interface.
This is the first tool of this kind I've ever created and it's going quite well, but a question that keeps nagging at me as I subclass PyQt's various GUI elements is: "Where should I implement this?"
For example, one of the tool's functions involves giving a user a form to fill out (which is a subclassed GUI element) and then submitting the completed form to be stored in one of the database's tables -- pretty standard stuff. The user fills out the form and upon pressing the "submit" button, the form's fields are validated to ensure they adhere to certain constraints and then a stored procedure is called in the database to submit the form's data. Let's call that function submit().
Obviously there are a myriad of ways to structure the code but the 3 big ones I've been toying with are:
Implement submit() directly in the form's class body as one of its methods
Create functions outside the class and have the class itself call them
Create a "handler" class that receives the form's fields in a signal emitted upon clicking the "submit" button
My question is this: which of them, if any, is the "best" way to do this? When I say "best" I mean which is the most "OOP-ish" in terms of accepted conventions, as well as which of them would be easiest for the programmer who comes after me to read and maintain.
Thanks in advance :)
Think of the different parts of your application as systems, each with their own responsibility. For example, the UI system, the database system, and the system(s) in between that implement the business rules. In each system, have a different version of your business objects that matches the abstraction of the system; for example, a user entry form in the UI system, a user table in the database system, a user model in the business system.
Then, as per your option 3, establish communication between the different systems via messages and signals. You would have to decide on some sort of protocol for the data payload being passed around so that you do not leak abstraction between systems. Data transfer objects are a good way to do that, but you could also send bytes or some textual representation such as JSON.
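To make option 3 concrete, here is a minimal sketch under PyQt5 of a form that validates its fields, emits a plain data transfer object through a signal, and a separate handler that receives it and would call the stored procedure. All names here (SubmitForm, FormData, FormHandler, submitted) are hypothetical and only illustrate the separation, not the asker's actual tool.

```python
# Minimal sketch of "option 3": the form emits a DTO via a signal,
# and a separate handler owns the database call. Names are hypothetical.
import sys
from dataclasses import dataclass

from PyQt5.QtCore import pyqtSignal
from PyQt5.QtWidgets import (QApplication, QLineEdit, QPushButton,
                             QVBoxLayout, QWidget)


@dataclass
class FormData:
    """Data transfer object: the only thing the UI shares with other systems."""
    name: str


class SubmitForm(QWidget):
    # The form emits a plain DTO; it knows nothing about the database.
    submitted = pyqtSignal(object)

    def __init__(self):
        super().__init__()
        self.name_edit = QLineEdit(self)
        self.submit_button = QPushButton("Submit", self)
        self.submit_button.clicked.connect(self._on_submit)
        layout = QVBoxLayout(self)
        layout.addWidget(self.name_edit)
        layout.addWidget(self.submit_button)

    def _on_submit(self):
        # Validate in the UI layer, then hand off a DTO.
        name = self.name_edit.text().strip()
        if name:
            self.submitted.emit(FormData(name=name))


class FormHandler:
    """Business-layer handler: receives DTOs and would call the stored procedure."""

    def handle(self, data: FormData):
        # Here you would call the stored procedure, e.g. via psycopg2 or an ORM.
        print(f"Storing form data: {data}")


if __name__ == "__main__":
    app = QApplication(sys.argv)
    form = SubmitForm()
    handler = FormHandler()
    form.submitted.connect(handler.handle)
    form.show()
    sys.exit(app.exec_())
```

The form never touches the database and the handler never touches widgets, so either side can be tested or replaced in isolation.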

Suggestion to choose correct WCF Instance Mode [closed]

I am Balu from Hyderabad. I am writing WCF RESTful services for our Android/iPhone developers (they consume JSON, which is why we chose REST). We have a single mobile application that runs several different projects; all of the project data comes to the mobile app dynamically from the web service, so everything is configured on the web-service side.
This one app handles 5 projects with roughly 100-150 users in total, so I plan to write a single service that loads projects dynamically using a factory/reflection approach.
Q) Which WCF instance mode is suitable for our projects?
From reading articles on WCF instance modes, my understanding is that PerCall is the right choice for our service. Is that correct? Please advise.
I have one more doubt: if a property is not marked for serialization (i.e. [DataMember] is not applied to it), will the object still travel across the network properly? I tried it without the attribute and the data still reaches the mobile app fine.
Please clarify my doubts and tell me which instance mode I should use:
Which instance mode is better?
Which ConcurrencyMode is better?
The PerCall instance mode is preferred when you don't need to maintain state between calls for the same client; in other words, your service is stateless. PerSession is used when you need to maintain some state between calls from a single client. Finally, Single (a singleton) is used when you need to share state between multiple clients. Depending on your binding and security settings, you will effectively get per-call or per-session behavior by default. PerCall is ideal because it makes your service easier to scale if and when you need to.
For your ConcurrencyMode, the default is single threaded. Since you're asking, I would suggest leaving this as the default (generally). However, take a look at the tricky case I talked about here.
The [DataContract] and [DataMember] attributes are not strictly necessary as of .NET Framework 3.5 SP1, which added support for serializing plain types without them. Prior to that version, you had to be explicit and specify these attributes.

Which should be done first? Domain model or Data Model? [closed]

Hi guys, let's say I have a new project, an inventory system, and I will be using Java. I go to my client, gather some requirements, and then model those requirements. Which should I do first: my class diagrams/domain model, or my data model? And why? I would really like your opinion on this. What do you do in the real world of software development?
I'm using these technologies: Java, Hibernate (ORM), Scrum (methodology), PostgreSQL (database).
Don't do either one first. Create a domain (object) model and an ER model in parallel. They should be very similar except that the domain model is concerned with data and behavior while the ER model is concerned only with data.
However, you need to be very careful to avoid a pitfall that many practitioners, even expert ones, fall into: the confusion between analysis and design. Both your domain model and your ER model should be analysis models and not design models. That means they describe the problem and the requirements, not the features you are going to add when you design the solution.
In particular, many of the ER diagrams you see in this forum are really relational data models, even though they use ER notation. And they incorporate design features like foreign keys and don't limit themselves to features that are inherent in the information requirements.
Failure to pin down the requirements fairly precisely before design begins is a major source of failure in large scale projects. In small scale projects, not so much.
My 2 cents...
Data tends to be longer-lived, more stable, and ultimately more important than code, so your approach should be data-centric. If you structure and normalize your data properly (and an ER diagram is an important tool for doing that), the rest will naturally follow.
IMO you should definitely not start thinking about your Data Model first.
The reason is that it's up to your Domain Layer to address all business needs.
Your Domain Layer must be agnostic. It should not be tied to any specific technical implementation nor reference any kind of framework; it should be self-contained and able to work alone. When designing your Domain Layer, do not think about persistence or even the way your data will be displayed. If you need methods to store your data, or methods to gather information from a specific UI container like the Session, just use interfaces.
When designing a Data Model, you're tied to the RDBMS you're going to use to store your data. You will think about the way your schema should be structured to store and access data efficiently. But the thing is, the business doesn't care how well your queries perform.
It's always a good thing to defer critical decisions like the UI, frameworks, database and so on, when you can. That way you focus only on business needs.
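Although the question mentions Java and Hibernate, here is a small language-neutral sketch in Python of what a persistence-agnostic domain layer might look like; the names (InventoryItem, ItemRepository, reserve_stock) are made up for illustration and are not from the question.

```python
# Sketch of a domain layer that depends only on an interface, never on a database.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class InventoryItem:
    """Pure domain object: business rules only, no ORM or SQL details."""
    sku: str
    quantity: int

    def reserve(self, amount: int) -> None:
        if amount > self.quantity:
            raise ValueError("Not enough stock to reserve")
        self.quantity -= amount


class ItemRepository(ABC):
    """Interface the domain depends on; the data model implements it later."""

    @abstractmethod
    def find_by_sku(self, sku: str) -> InventoryItem:
        ...

    @abstractmethod
    def save(self, item: InventoryItem) -> None:
        ...


def reserve_stock(repo: ItemRepository, sku: str, amount: int) -> None:
    # Business logic works against the interface, never a concrete database.
    item = repo.find_by_sku(sku)
    item.reserve(amount)
    repo.save(item)
```

A Hibernate- or JDBC-backed repository would implement ItemRepository later, without the domain code having to change.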

Any tips for creating a key value store abstraction layer? [closed]

With all the key value data stores out there I have started to create an abstraction layer so that a developer does not have to be tied in to a particular store. I propose to make libraries for:
Erlang
Ruby
Java
.NET
Does anyone have any tips on how I should go about designing this API?
Thanks
First off, as a general rule any time you build a "pluggable" abstraction layer, build it to support at least two real implementations from the start. Don't build it for just one datastore and try to make it abstract, because you'll overlook details that don't plug into another implementation very well. By forcing it to work with two separate implementations, you'll get closer to something that is actually flexible, though you'll still have to make further changes to support a third and fourth data store.
Second, don't bother; these things already exist. Microsoft has provided a ton of these for their technologies (ODBC, ADO, ADO.NET, etc.), and I'm sure Ruby/Java/etc. have several as well. I understand the desire to encapsulate the already-existing technology, but the more data stores you need to support, the more complexity you need to build in, and the closer you'll get to ADO.NET (or similar technologies). Companies like MS have spent a ton of money and research on solving this exact problem, and that is what they came up with.
I would strongly recommend checking out Twitter's Storehaus project - this is a key-value store abstraction layer for the JVM and written in Scala, supporting (to date) Memcache, Redis, DynamoDB, MySQL, HBase, Elasticsearch and Kafka.
Storehaus's core module defines three traits:
A read-only ReadableStore with get, getAll and close
A write-only WritableStore with put, putAll and close
A read-write Store combining both
In the Ruby ecosystem, you should check out moneta, which again provides a unified interface to key/value stores. It has a lot more features than Storehaus.
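For a feel of how small such an abstraction can start, here is an illustrative Python sketch of the same read/write split; this is not Storehaus's actual Scala API (nor moneta's), just the general shape, with an in-memory backend standing in as one of the "at least two real implementations" suggested above.

```python
# Sketch of a key-value store abstraction split into read-only, write-only,
# and combined interfaces, plus a trivial in-memory backend.
from abc import ABC, abstractmethod
from typing import Dict, Iterable, Optional


class ReadableStore(ABC):
    @abstractmethod
    def get(self, key: str) -> Optional[bytes]:
        ...

    def get_all(self, keys: Iterable[str]) -> Dict[str, Optional[bytes]]:
        # Default bulk read in terms of single reads; backends may override.
        return {k: self.get(k) for k in keys}

    def close(self) -> None:
        pass


class WritableStore(ABC):
    @abstractmethod
    def put(self, key: str, value: Optional[bytes]) -> None:
        ...

    def put_all(self, pairs: Dict[str, Optional[bytes]]) -> None:
        for k, v in pairs.items():
            self.put(k, v)

    def close(self) -> None:
        pass


class Store(ReadableStore, WritableStore):
    """Read-write store combining both interfaces."""


class InMemoryStore(Store):
    # The simplest possible backend; a Redis or DynamoDB backend would be the second.
    def __init__(self) -> None:
        self._data: Dict[str, bytes] = {}

    def get(self, key: str) -> Optional[bytes]:
        return self._data.get(key)

    def put(self, key: str, value: Optional[bytes]) -> None:
        if value is None:
            self._data.pop(key, None)  # putting None deletes the key
        else:
            self._data[key] = value
```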

A Software Design Issue: The Router Class [closed]

In subsystem design, I sometimes see software designs that have one high-level class with only one feature: it routes a call from a client using the class to a certain other class the client would like to use. However, it has no functionality of its own. Take this scenario:
Say there are five classes in the bowling alley subsystem: An alley, a lane, a bowler, control desk, and a score. Anytime a client outside the subsystem wants any data to display to a user, it would communicate only to the control desk (the router) that would call any of the classes it holds to get the client's requested data (a score for example: Client calls control desk with getScore(), which calls a Lane's getScore(), which calls a Bowler's getScore()).
I understand this is a bad design decision, but I'd like to hear real-world examples with consequences you discovered of having this router class (Can also be known as a "middleman"). What issues did you run into as the system you were working on evolved? What arguments would you make to persuade software designers to avoid router classes?
I'd argue that in some designs a router is the preferred design pattern, such as in MVC frameworks to delegate handlers for URLs. In that situation it's really helpful because it provides a very clean separation between what the client "sees" and the actual logic behind it.
Anytime a client outside the subsystem wants any data to display to a user, it would communicate only to the control desk (the router) that would call any of the classes it holds to get the client's requested data
this sounds like the Facade pattern
As for the middleman, in the following example, wouldn't the Lane be the culprit?
a score for example: Client calls control desk with getScore(), which calls a Lane's getScore(), which calls a Bowler's getScore())
Simplifying the interface to a subsystem for the benefit of clients outside the subsystem could be considered good design.
The Facade pattern, and the Mediator pattern perform similar tasks to what you are describing. Your use of the Middleman moniker implies the Mediator pattern over the Facade pattern, as a Middleman is responsible for negotiating between two entities with neither entity needing to know the specifics of how to communicate with the other.
You can use either of these patterns to reduce coupling for the client class, which needs to use the system the Mediator or Facade is masking. In the case of the Facade pattern, the intention is to provide a convenient way to interface a system of classes. For the Mediator pattern, the purpose is to abstract the steps required to perform a complex task from the client.
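As a concrete illustration of the Facade reading of the bowling-alley example, here is a small Python sketch; the class names follow the question, but the method signatures and bodies are invented for illustration.

```python
# The control desk acts as a Facade: the client only talks to it, and it
# delegates into the Lane/Bowler classes inside the subsystem.
from typing import Dict, List


class Bowler:
    def __init__(self, name: str, score: int) -> None:
        self.name = name
        self._score = score

    def get_score(self) -> int:
        return self._score


class Lane:
    def __init__(self, bowlers: List[Bowler]) -> None:
        self._bowlers = {b.name: b for b in bowlers}

    def get_score(self, bowler_name: str) -> int:
        return self._bowlers[bowler_name].get_score()


class ControlDesk:
    """Facade: the only class clients outside the subsystem talk to."""

    def __init__(self, lanes: Dict[int, Lane]) -> None:
        self._lanes = lanes

    def get_score(self, lane_number: int, bowler_name: str) -> int:
        # Delegates into the subsystem so the client never touches Lane/Bowler.
        return self._lanes[lane_number].get_score(bowler_name)


desk = ControlDesk({1: Lane([Bowler("Ann", 120)])})
print(desk.get_score(1, "Ann"))  # 120
```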
I don't know that routing method calls is always such a bad idea.
It seems that you'd just have the problem associated with any additional layer of abstraction - that the abstraction can break, or that it's one more thing that can potentially misbehave if there's a change made to something underlying.
I've never seen anything that called more than a few layers deep, but I just imagine that adding extra calls would make it more difficult to trace the path information takes, and make troubleshooting more difficult.
One potential problem, though, is if each layer implements its own error handling or retry process, making something that's insignificant at each level overwhelming as a whole. For example, if the Lane makes two attempts to check the bowler's score, and the desk makes 3 attempts to check the score, then a failure of the bowler to return a score will result in 6 queries being made. Add a 30 second timeout at the bowler, and you're suddenly waiting for 3 minutes for what should take 30 seconds.
OldNewThing had an article about an example of this in the Windows OS, and the problems it caused, but now I can't seem to find it.
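To see how the retry example above compounds, here is a toy Python sketch using the same numbers (2 lane attempts, 3 desk attempts, a 30-second bowler timeout); all function names are invented for illustration.

```python
# Toy demonstration of per-layer retries multiplying: 3 desk attempts x
# 2 lane attempts = 6 bowler queries, each stuck behind a 30 s timeout.
BOWLER_TIMEOUT_S = 30
bowler_calls = 0


def bowler_get_score():
    global bowler_calls
    bowler_calls += 1
    raise TimeoutError("bowler never answers")  # each attempt waits 30 s first


def lane_get_score():
    for _ in range(2):              # the lane retries twice
        try:
            return bowler_get_score()
        except TimeoutError:
            pass
    raise TimeoutError("lane gave up")


def desk_get_score():
    for _ in range(3):              # the desk retries three times
        try:
            return lane_get_score()
        except TimeoutError:
            pass
    raise TimeoutError("desk gave up")


try:
    desk_get_score()
except TimeoutError:
    print(bowler_calls, "bowler queries,",
          bowler_calls * BOWLER_TIMEOUT_S, "seconds of waiting")
    # prints: 6 bowler queries, 180 seconds of waiting
```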
I think that both the ASP.NET MVC and MVP patterns utilize this type of concept, where you end up with something that simply handles the logic executed from one end to the other, or on behalf of a lower layer to a higher layer. This certainly makes testing easier to perform, and that in and of itself is a MAJOR benefit. This type of pattern does create some manual or tedious work, in that instead of clicking a button and having it do a task directly, you click the button, something intercepts that click, and then it calls into some service-managing class that does the work. But when it comes to keeping your code clean and readable, more separation is often better.
If you are not a tester or couldn't care less about patterns directly, then think of it another way. You have a link that takes a user to a page. This link is scattered all over your site because the destination is very important or used a lot. Then the destination changes. That could be a find-and-replace operation... or you could insert a RedirectService (call it what you will) that, when someone clicks a link, takes charge and directs the clicker to the right location. This allows the destination to be defined once in one location and therefore changed once. Find and replace often changes things that weren't meant to be changed!
No matter how you look at this, separation of concerns is a good thing. The UI is one concern. The controller of activities is another concern. The activity itself is yet another concern!