AWSDynamoDBObjectMapper or AWSDynamoDB? - objective-c

The AWS documentation is seemingly endless, and different pages tell me different things. For example, one page tells me that AWSDynamoDBObjectMapper is the entry point to working with DynamoDB, while another tells me that AWSDynamoDB is the entry point to working with DynamoDB. Which class should I be using? Why?
EDIT: One user mentioned he didn't understand the question. To be more clear: I want to know, in general, what the difference is between using AWSDynamoDB and AWSDynamoDBObjectMapper as entry points for interfacing with DynamoDB.

Doc links for both:
AWSDynamoDB
AWSDynamoDBObjectMapper
Since both can clearly read, write, query, and scan, you need to pick out the differences. It appears to me that the ObjectMapper class supports the concept of mapping an AWSDynamoDBModel to a DB vs. directly manipulating specific objects (as AWSDynamoDB does). Moreover, it appears that AWSDynamoDB also supports methods for managing tables directly.
I would speculate that AWSDynamoDB is designed for managing data where the schema is pre-defined on the DB, and AWSDynamoDBObjectMapper is designed for managing data where the schema is defined by the client.
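For illustration only: the same high-level vs. low-level split exists in the AWS SDK for Java, where DynamoDBMapper plays the role of AWSDynamoDBObjectMapper and the plain AmazonDynamoDB client plays the role of AWSDynamoDB. Here is a minimal sketch with a made-up "Notes" table:

    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;
    import com.amazonaws.services.dynamodbv2.model.AttributeValue;
    import com.amazonaws.services.dynamodbv2.model.PutItemRequest;
    import java.util.HashMap;
    import java.util.Map;

    public class Example {

        // Object-mapper style: the client-side class declares how it maps to the table.
        @DynamoDBTable(tableName = "Notes")
        public static class Note {
            private String noteId;
            private String text;

            @DynamoDBHashKey(attributeName = "noteId")
            public String getNoteId() { return noteId; }
            public void setNoteId(String noteId) { this.noteId = noteId; }

            @DynamoDBAttribute(attributeName = "text")
            public String getText() { return text; }
            public void setText(String text) { this.text = text; }
        }

        public static void main(String[] args) {
            AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

            // High level: save a typed object; the mapper builds the request for you.
            DynamoDBMapper mapper = new DynamoDBMapper(client);
            Note note = new Note();
            note.setNoteId("n-1");
            note.setText("hello");
            mapper.save(note);

            // Low level: assemble the item attribute by attribute yourself.
            Map<String, AttributeValue> item = new HashMap<>();
            item.put("noteId", new AttributeValue("n-1"));
            item.put("text", new AttributeValue("hello"));
            client.putItem(new PutItemRequest("Notes", item));
        }
    }

In the iOS SDK the split is the same: AWSDynamoDBObjectMapper saves/loads typed model objects, while AWSDynamoDB takes raw request objects and is also where the table-management calls live.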
All of this speculation aside though, the most important bit you can glean from the documentation is:
Instead of making the requests to the low-level DynamoDB API directly from your application, we recommend that you use the AWS Software Development Kits (SDKs). The easy-to-use libraries in the AWS SDKs make it unnecessary to call the low-level DynamoDB API directly from your application. The libraries take care of request authentication, serialization, and connection management. For more information, go to Using the AWS SDKs with DynamoDB in the Amazon DynamoDB Developer Guide.
I would recommend this approach rather than worrying about the ambiguity of class documentation.

Use AWS Resources or Swagger API Import for API Gateway?

I'm about to set up an AWS API Gateway via CloudFormation and I'm wondering which is the better solution:
should I use the native CloudFormation resources for the Resources and Methods, or is the better approach to import the well-known OpenAPI (Swagger) file we have into the API Gateway resource?
From my research I found that using Swagger has some limitations (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-known-issues.html), but on the other hand it's pretty much the standard for describing APIs.
So going all in on AWS CloudFormation might have some disadvantages I cannot see right now. That's why I'm asking for experiences from someone who has been in the same situation. Grateful for any guidance...
Merci A
Personally, I find the best way to develop API Gateway resources is with the Serverless Framework: it's super easy to use and integrates very easily with other AWS services, e.g. Lambda.
Also, under the hood Serverless is just CloudFormation templates, so it's very flexible.
Nowadays you can also write your templates using SAM (the Serverless Application Model, https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html) if you don't like the Serverless Framework.
Some of the benefits are that you write less CloudFormation code and you can locally debug/test your Lambdas/Step Functions.
Here are my two cents on the best practice among your options:
While developing products, Swagger or Apiary are great tools for documenting your API and quickly mocking it before implementing it. With a mock API and documentation in the hands of, say, a product manager, it becomes easy to get started with a solid plan for development. Tools like Swagger can auto-mock an API specification, so if you wish to import this specification only to mock an API, the import feature is a great tool to use; otherwise it is not. Let me explain why.
By importing an API and orchestrating AWS resources directly from Swagger, you bring in lots of limitations, the primary one being that your development process does not include a framework such as Serverless or Zappa. This forces you to write Lambda functions directly through the AWS console or the AWS CLI, and it complicates the project architecture.
When writing Lambda functions without a framework, if we know up front that our functions will be orthogonal to each other and will not share many common dependencies, then great, this works. But for any project with, say, a database, functions accessing external APIs, some endpoints guarded by an authorizer, and other resources, using a framework is definitely the better option. It makes it easier to create layers and common code, for example a database wrapper class.
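To make the "common code" point concrete, here is a hedged Java sketch of two Lambda handlers sharing one database wrapper; every class, table, and field name here is invented, and the wrapper body is left as a stub:

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import java.util.Map;

    // Hypothetical shared wrapper: one place for connection and query details.
    class OrderStore {
        // In a real project this would hold a JDBC/DynamoDB client or connection pool.
        String findOrder(String orderId) {
            return "{\"orderId\":\"" + orderId + "\",\"status\":\"OPEN\"}";
        }
        void saveOrder(String orderId, String payload) {
            // persist the order...
        }
    }

    // Two endpoints, each deployed as its own Lambda, both reusing the same wrapper.
    // (In a real project each handler would live in its own file.)
    class GetOrderHandler implements RequestHandler<Map<String, String>, String> {
        private final OrderStore store = new OrderStore();
        @Override
        public String handleRequest(Map<String, String> input, Context context) {
            return store.findOrder(input.get("orderId"));
        }
    }

    class PutOrderHandler implements RequestHandler<Map<String, String>, String> {
        private final OrderStore store = new OrderStore();
        @Override
        public String handleRequest(Map<String, String> input, Context context) {
            store.saveOrder(input.get("orderId"), input.get("body"));
            return "OK";
        }
    }

A framework's job is then mostly to wire handlers like these to API Gateway routes and to package the shared code, which is exactly what becomes awkward when the API definition lives only in an imported Swagger file.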
When using any framework, it is better to start with the framework's boilerplate and make the implementation match the documentation. By studying the advantages and limitations of that framework, we can decide whether it fits our architecture.
Also, IMO, the Swagger-import method is not widely used, and finding help might be tough later on as the project becomes more complex.
Hope this helps.

Service vs Controller vs Provider naming conventions

As I grow in my professional career, I consider naming conventions very important. I have noticed that people throw around controller (LibraryController), service (LibraryService), and provider (LibraryProvider) and use them somewhat interchangeably. Is there any specific reasoning for using one vs. the other?
If there are websites with more concrete definitions, that would be great.
In Java Spring, Spring Boot and also .NET you would have:
Repository: persists data in the database and performs SQL queries.
Service: contains most of the business logic.
Controller: defines REST endpoints and contains as little logic as possible.
Conceptually this means that the WHAT (functional) is separated from the HOW (technical) as much as possible. The services try to stay technologically neutral. By contrast, a controller only wants to define an external contract for communication. And finally, the repository only wants to facilitate access to the database.
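A minimal Spring Boot sketch of that layering, assuming spring-boot-starter-web and spring-boot-starter-data-jpa on the classpath; the Book entity and the endpoint are invented for illustration:

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.springframework.data.jpa.repository.JpaRepository;
    import org.springframework.stereotype.Service;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;

    @Entity
    class Book {
        @Id Long id;
        String title;
    }

    // Repository: persistence only (Spring Data generates the queries).
    interface BookRepository extends JpaRepository<Book, Long> {}

    // Service: the business rules, with no HTTP and no SQL details.
    @Service
    class BookService {
        private final BookRepository repository;
        BookService(BookRepository repository) { this.repository = repository; }

        Book findBook(long id) {
            return repository.findById(id)
                    .orElseThrow(() -> new IllegalArgumentException("No book " + id));
        }
    }

    // Controller: just the external contract, delegating to the service.
    @RestController
    class BookController {
        private final BookService service;
        BookController(BookService service) { this.service = service; }

        @GetMapping("/books/{id}")
        Book getBook(@PathVariable long id) { return service.findBook(id); }
    }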
Organizing your code in this way keeps the business logic short, clean, and maintainable. Unfortunately, it is not always easy to keep the layers separated. For example, it is tempting to pollute or enrich your objects with metadata in the form of decorators/annotations (e.g. the database column name and data type).
Some developers don't see harm in this and get away with it. Others keep their objects strictly separated and define multiple sets of objects.
The objects for the database are often referred to as "entities" or "models".
For a REST controller they are often referred to as DTOs which stands for data-transfer-object.
Having multiple objects means that you need Mappers to convert one type of object to another. Some frameworks can do this for you (e.g. MapStruct).
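As a hedged sketch of that entity/DTO/mapper split with MapStruct (the Book classes are made up; MapStruct generates the mapper implementation at build time):

    import org.mapstruct.Mapper;
    import org.mapstruct.factory.Mappers;

    // Entity/model: shaped for the database.
    class BookEntity {
        public Long id;
        public String title;
        public String internalNotes; // not something we want to expose over REST
    }

    // DTO: shaped for the REST contract.
    class BookDto {
        public Long id;
        public String title;
    }

    // MapStruct generates an implementation of this interface at compile time.
    @Mapper
    interface BookMapper {
        BookMapper INSTANCE = Mappers.getMapper(BookMapper.class);

        BookDto toDto(BookEntity entity);
    }

    // Usage: BookDto dto = BookMapper.INSTANCE.toDto(entity);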
It would be easy to claim that strictness is always a good thing, but it can slow you down. It's okay to strike a balance.
In Node.js, the concepts of controllers and services are the same. However, the term Repository isn't used very often. Instead, people would call that a Provider, or sometimes they would just generalize Repositories as a kind of Service.
NestJS has stronger opinions about this (which can be a good thing). The naming conventions of NestJS (a Node.js framework) are strongly influenced by the naming conventions of Angular, which is of course a totally different kind of framework (front-end).
(For completeness: in Angular, a Provider is actually just something that can be injected as a dependency. Most providers are Services, but not necessarily; one could also be a Guard or even a Module, where a Module is more like a set of tools/drivers or a connector.
PS: the usage of the term Module is a bit confusing, because there are also "ES6 modules", which are a totally different thing.)
ES6 and more modern versions of JavaScript (including TypeScript) are extremely powerful when it comes to (de)constructing objects, and that often makes mappers unnecessary.
Having said that, most Node.js and Angular developers prefer to use TypeScript these days, which has more features than Java or C# when it comes to defining types.
So, all these frameworks are influencing each other. And they pretty much all agree on what a Controller and a Service is. It's mostly the Repository and Provider words that have different meanings. It really is just a matter of conventions. If your framework has a convention, then stick to that. If there isn't one, then pick one yourself.
These terms can be synonymous with each other depending on context, which is why each framework or language creator is free to declare them explicitly as they see fit... think function/method/procedure or process/service: all pretty much the same thing, but with slight differences in different contexts.
Just going off formal English definitions:
Provider: a person or thing that provides something.
i.e. the provider does a service by controlling some process.
Service: the action of helping or doing work for someone.
i.e. the service is provided by controlling some work process.
Controller: a person or thing that directs or regulates something.
i.e. the controller directs something to provide a service.
These definitions are just listed to explain how a developer looks at common English meanings when defining the terminology of a framework or language; it's not always one-to-one, and the similarity in terminology actually gives the developer a means of naming things that are very similar but still slightly different.
So, for example, let's take AngularJS. Here the developers decided to use the term Controller to mean "HTML controller", a Service to mean something like a "quasi class" (since they are instantiated with the new keyword), and a Provider is really a superset of Service and Factory, which are also similar. You could really write any application using any of them and wouldn't lose much; though one might be a little better than another in a certain context, I don't personally believe it's worth the extra confusion... essentially they are all providers. The Angular people could have just defined factory, provider, and service as a single term, "provider", and then passed in modifiers for things like "static" and "void" as most languages do, and the exact same functionality could have been provided; this would have been my preference. However, I've learned not to fight the conventions and terminology of the framework you're working in, no matter how strongly you disagree.
I am also looking for a more meaningful name than Provider :)
and found this useful post.
Old dev here who stumbled on this. In my opinion, and from how I've seen these terms used over the last 20 years, it varies by language, but the Java/C# crowd mostly uses them as follows.
A service handles business logic and deals with domain objects. You find services in controllers and other services.
A repository does NOT handle business logic; instead it acts like a pool of domain objects, with helper methods for finding or persisting them. Services often contain repositories. Repositories often contain a context and are responsible for mapping from infrastructure-shaped data to domain-shaped data if the two definitions have drifted apart. Controllers also often contain repositories for CRUD endpoints.
A context handles infrastructure the domain owns. Most often this is a database, but "context" means that anything that touches this data does so through (in) this context. A context returns infrastructure-shaped data. A repository often contains a context. A context directly in services is sometimes appropriate. A context in a controller is a hard no.
A provider provides access to infrastructure some other app owns. Most often these are REST APIs, but they can also be Kafka streams or RPC classes that read data from or push data to someone else. If the source of truth for some of your domain objects' fields changes, you will probably see a provider next to a context in your repository, and your repository handles insulating the rest of your code from that change. Providers that provide RPC functionality are often found in services. In microservices, gateways, or vertical-slice architectures you sometimes see providers directly in controllers.
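A hedged Java sketch of that nesting; all the names are invented and the infrastructure calls are stubbed out:

    import java.util.Map;

    // Domain object.
    class Order {
        String id;
        String status;
        double price;
    }

    // Context: the domain's own infrastructure (a database); returns infrastructure-shaped data.
    class OrderDbContext {
        Map<String, String> findOrderRow(String id) {
            // in reality: JDBC or an ORM call against the database this service owns
            return Map.of("id", id, "status", "OPEN");
        }
    }

    // Provider: infrastructure some other app owns (here, a pretend pricing REST API).
    class PricingProvider {
        double currentPrice(String orderId) {
            // in reality: an HTTP call to another team's API
            return 9.99;
        }
    }

    // Repository: a pool of domain objects; maps infrastructure-shaped data to domain shape,
    // combining the context with the provider so the rest of the code never sees either.
    class OrderRepository {
        private final OrderDbContext context = new OrderDbContext();
        private final PricingProvider pricing = new PricingProvider();

        Order findOrder(String id) {
            Map<String, String> row = context.findOrderRow(id);
            Order order = new Order();
            order.id = row.get("id");
            order.status = row.get("status");
            order.price = pricing.currentPrice(id);
            return order;
        }
    }

    // Service: business logic on top of the repository; this is what a controller would call.
    class OrderService {
        private final OrderRepository orders = new OrderRepository();

        boolean canShip(String id) {
            return "OPEN".equals(orders.findOrder(id).status);
        }
    }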
One old guy’s opinion but I hope it helps.

Umbraco Hive and Services Layer

I'm experimenting with the new Umbraco 5 Hive, and I'm a bit confused.
I'm plugging in an existing LINQ to SQL services layer, which I developed for a WebForms site.
I don't know much about the repository pattern; my services handle all connections with the data context and work very well.
I have made a few repositories that plug into the Hive and handle conversion of my entities to the Umbraco TypedEntity type.
These repositories reference my existing services layer to retrieve, add, update, and delete. The services also handle other entity-specific functions, which will not be used by the Hive.
Now, it's nice to plug in these services and just reference them in the Hive repositories, but it seems I may be doing things the wrong way round, according to the official repository pattern as I have read about it.
I know there's no hard fast rules, but I would appreciate comments on what I'm doing to achieve this functionality.
I've asked this here instead of the Umbraco forum, as I want a wider perspective.
Cheers.
I personally feel that the Hive is overkill. With the ability to use your own classes directly within Razor macros, I think the best approach is to forgo the Hive altogether and simply use your classes. Why would you trade all of the power of your existing service just to make it fit into the Hive interface?
If you're writing a library for other Umbraco developers, you may need to do this, but it's my personal opinion that the hive is over-engineered at worst and a layer of abstraction aimed at newish developers at best.
So, if I were to advise you, I would say to consider the more general principles: "Keep It Simple" and "You Aren't Gonna Need It". If the interface they give you offers a tangible benefit, implement it. If not, consider what you really gain for all of that work.

ORM with automatic client-server data synchronization. Are there any ready-to-use solutions?

Consider a client application that should store its data on a remote server. We do not want it to access this data "on the fly"; rather, we want it to keep a copy of this data in a local database, so that no connection to the remote server is needed to use the application. Eventually we want to sync the local database with the remote server. A good example of what I am talking about is the Evernote service. This type of application is very common, for instance, in mobile development, where the user is not guaranteed to have a permanent Internet connection, bandwidth is limited, and traffic can be expensive.
ORM (object-relational mapping) solutions generally allow the developer to define some intermediate "model" for his business-logic data, and then work with it as an object hierarchy in his programming language, with the ability to store it in a relational database.
Why not add to an ORM system a feature that would allow automatic synchronization of two databases (client and server) that share the same data model? This would simplify developing applications of the type I described above. Are there any systems, for any platform or language, that have this or a similar feature implemented?
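Roughly, this is the kind of API I imagine, sketched in Java; every name here is hypothetical, and I am asking whether anything like it already exists:

    import java.time.Instant;
    import java.util.List;

    // Hypothetical contract for an ORM that keeps a local store and a remote store in sync.
    interface SyncStore<T> {
        void save(T entity);                         // writes go to the local database immediately
        List<T> findAll();                           // reads never touch the network
        SyncReport syncWith(RemoteEndpoint remote);  // push local changes, pull remote ones
    }

    // Hypothetical summary of one sync run.
    class SyncReport {
        int pushed;
        int pulled;
        List<String> conflicts;  // e.g. ids that were resolved by "last write wins"
        Instant finishedAt;
    }

    // Hypothetical handle to the server side.
    class RemoteEndpoint {
        final String url;
        RemoteEndpoint(String url) { this.url = url; }
    }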
These links may provide some useful information:
http://gwtsandbox.com/?q=node/34
http://www.urielkatz.com/archive/detail/google-gears-orm-v01/
http://www.urielkatz.com/archive/detail/google-gears-orm/
AFAIK, there are no such ORM tools.
It was one of the original goals of our team (I'm one of the DataObjects.Net developers), but this feature is still not implemented. Likely we'll start working on database sync this spring, but since almost nothing has been done yet, there is no exact deadline for it.
There is at least one Open Source ORM matching your needs, but it is a Delphi ORM.
It is called mORMot, and it uses JSON in a stateless/RESTful architecture to communicate over GDI messages, named pipes, or HTTP/1.1. It is able to connect to any database engine, and it embeds an optimized SQLite3 engine.
It is a true client-server ORM. That is, the ORM is not used only for data persistence of objects (as in other implementations), but as part of a global n-tier, service-oriented architecture. This really makes the difference.

Amazon SimpleDB

Has anyone considered using something along the lines of the Amazon SimpleDB data store as their backend database?
SQL Server hosting (at least in the UK) is expensive, so could something like this, along with cloud file storage (S3), be used for building apps that can grow as your application grows?
It's great in theory, but would anyone consider using it? In fact, is anyone actually using it now for real production software? I would love to read your comments.
This is a good analysis of Amazon services from Dare.
S3 handles what I've typically heard described as "blob storage". A typical Web application has media files and other resources (images, CSS stylesheets, scripts, video files, etc.) that are simply accessed by name/path. However, a lot of these resources also have metadata (e.g. a video file on YouTube has metadata about its rating, who uploaded it, number of views, etc.) which needs to be stored as well. This need for queryable, schematized storage is where SimpleDB comes in. EC2 provides a virtual server that can be used for computation, complete with a local file system instance which isn't persistent if the virtual server goes down for any reason. With SimpleDB and S3 you have the building blocks to build a large class of "Web 2.0"-style applications when you throw in the computational capabilities provided by EC2.
However neither S3 nor SimpleDB provides a solution for a developer who simply wants the typical LAMP or WISC developer experience of building a database driven Web application or for applications that may have custom storage needs that don't fit neatly into the buckets of blob storage or schematized storage. Without access to a persistent filesystem, developers on Amazon's cloud computing platform have had to come up with sophisticated solutions involving backing data up manually from EC2 to S3 to get the desired experience.
I just finished writing a library to make porting an app to SimpleDB in Perl easy, Net::Amazon::SimpleDB::Simple, because I found the Amazon client libraries painful. The library isn't on CPAN yet, but it is at http://rjurneyopen.s3.amazonaws.com/SimpleDB/Simple.pm The idea was to make it trivial to stuff hashes in and out of SimpleDB.
I just ported an app to use it. Overall I am impressed with SimpleDB... even inefficient queries take only 2-3 seconds to return. SimpleDB doesn't seem to care about the size of your table, owing to its Erlang/parallel nature. Tablescans are easy for it.
The pain comes from the fact that you can't count, sum, or group by. If you plan on doing any of those things... then SimpleDB probably isn't for you. At the moment, in terms of functionality, it sits somewhere between memcached and MySQL. You can SELECT ... ORDER BY ... LIMIT, which is nice. It's also nice that you don't have to scale it yourself, and it's nice that it doesn't care how much you stuff into it. But more advanced operations like analytics are painful at best; you'll have to do your own calculations server-side. It's also a big plus that on any computer I can use the SimpleDB CLI http://code.google.com/p/amazon-simpledb-cli/ to query my data.
There are some confusing 'gotchas.' For instance, attributes can have more than one value, and you have to explicitly set 'replace' when storing items. Also, storing undef or a null string results in a library error, instead of deleting that attribute name/value pair or setting it to a null/empty string.
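To make the multi-value/replace gotcha and the query limits concrete, here is a hedged sketch using the AWS SDK for Java's SimpleDB client rather than the Perl library above; the "notes" domain and its attributes are invented:

    import com.amazonaws.services.simpledb.AmazonSimpleDB;
    import com.amazonaws.services.simpledb.AmazonSimpleDBClientBuilder;
    import com.amazonaws.services.simpledb.model.PutAttributesRequest;
    import com.amazonaws.services.simpledb.model.ReplaceableAttribute;
    import com.amazonaws.services.simpledb.model.SelectRequest;
    import java.util.Arrays;

    public class SimpleDbGotchas {
        public static void main(String[] args) {
            AmazonSimpleDB sdb = AmazonSimpleDBClientBuilder.defaultClient();

            // Attributes are multi-valued: without replace=true, a second put ADDS
            // another "status" value instead of overwriting the existing one.
            sdb.putAttributes(new PutAttributesRequest()
                    .withDomainName("notes")
                    .withItemName("note-1")
                    .withAttributes(Arrays.asList(
                            new ReplaceableAttribute("status", "done", true),
                            new ReplaceableAttribute("tag", "work", false))));

            // SELECT with ORDER BY and LIMIT works, but there is no SUM or GROUP BY;
            // that kind of aggregation has to happen on your own servers.
            sdb.select(new SelectRequest(
                    "select * from `notes` where status = 'done' order by status limit 10"))
               .getItems()
               .forEach(item -> System.out.println(item.getName()));
        }
    }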
Learning to think in a largely un-normalized way is a little strange too, which is why I would second the suggestion above that it is best for new applications. Porting a SQL app to SimpleDB would be painful because your application logic would have to change; the way you do things is a bit different. The Amazon docs are pretty good at explaining this.
All of this is extractable into a library that sits atop SimpleDB, so for your use of SimpleDB you will want to pick a good library... you probably don't want to deal with it directly. There is some work on the PHP side to make things easy, and there is my library. There is a Rails ActiveResource, but it doesn't seem to do much for you.
All in all it's still early in the game, but compared to other APIs (Twitter comes to mind), I have to say that the SimpleDB REST API is pretty simple (especially considering that it is XML) and polite to work with. I would recommend it... depending on the requirements of your application and the economics of your use of it. If you're looking to rapidly scale a service that doesn't put a great load on the DB and don't want to bother with a scalable MySQL/memcached combo... then SimpleDB can offer a 'simple' solution for you.
I expect that its features will continue to grow and it will be a good choice for more and more applications that do more complex and interesting things. But right now it is targeted at and appropriate for your typical Web 2.0 service.
We are using SimpleDB almost exclusively for our new projects. The zero-maintenance, high-availability, no-install aspects are just too good. And for your Ruby developers, check out SimpleRecord, an ActiveRecord-like interface for SimpleDB which makes it super easy to use.
But do you really need SQL Server? Can't you live with PostgreSQL or MySQL? Both have proven to be ok for most tasks.
Now if you need SQL Server features then you're out of luck.
Another option is to rent a server. How expensive is expensive?
(I've used Amazon S3 to store images for an application, it's ok and works fine, at least for that)
I haven't used SimpleDB, but I have been using a combination of S3, EC2, and MySQL for our application.
As long as you are willing to consider something like SimpleDB, you might as well consider using MySQL (which is very scalable, and not that expensive).
On the S3 and EC2 side, it is great in practice as well.
SimpleDB works great for many applications... If your project will require a lot of analytic reporting, joins, etc., you may want to consider MySQL or a hybrid model.
If you do go with SimpleDB, we've developed Radquery.com for our internal use and opened it up to the public.