I am wondering if there are any agreed upon naming conventions for intents, events and contexts in Dialogflow.
If there are none, then I would appreciate if you shared your own naming conventions!
I find that it's a bit of a contradiction to say 'it doesn't really matter as long as it's easy to understand for others'. If there were naming conventions, it would be much easier for someone to understand a new Dialogflow bot.
Here's my take:
Intents
I use dots to group intents and imply a hierarchy. The first part of the intent name is ideally just one word that clearly indicates the main subject of the intent. For example
name would be an intent that receives a user's name as an input. name.confirm would be the follow-up intent that receives confirmation of the name. name.confirm.yes would be the intent where the user has given confirmation.
This is in the context of a bot which is gathering contact data so the input function is implied. In a more mixed-type chatbot, you could add the type of intent as a first word to categorize your intents better. E.g. input.name.confirm.yes or FAQ.shipping.overseas or smalltalk.agent.location ('Where are you?').
I use the same approach for fallback intents: fallback.name would be the fallback intent that is triggered when the bot is waiting for the user to input their name but doesn't understand the answer.
Contexts
For contexts I use snake case. For example awaiting_email would be the context that is set when the bot is waiting for the user to input their email address. After I have the email address I would set a context email to carry forward the information so that other intents can use it as a context. Or if I'm collecting several pieces of data about the user, I will set the context user and other intents can access certain parameters e.g. via user.email.
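For instance, here is a minimal sketch of a fulfillment response that follows this convention (assuming the Dialogflow ES v2 webhook response format; the session path, texts and function name are placeholders of mine, not from the original answer):
from typing import Optional

def build_response(session_path: str, email: Optional[str]) -> dict:
    # Ask for the email if we don't have it yet; otherwise carry it forward.
    if email is None:
        # Still waiting for the address: set the awaiting_email context.
        contexts = [{
            "name": f"{session_path}/contexts/awaiting_email",
            "lifespanCount": 2,
        }]
        text = "What's your email address?"
    else:
        # Email collected: store it on a long-lived 'user' context so later
        # intents can read it as user.email.
        contexts = [{
            "name": f"{session_path}/contexts/user",
            "lifespanCount": 50,
            "parameters": {"email": email},
        }]
        text = "Thanks, got it!"
    return {"fulfillmentText": text, "outputContexts": contexts}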
I made a video about the topic as well: https://youtu.be/kgKuS2RJcy4
It's obvious that everyone is coming from a slightly different angle because their area of application is different. I'm sure we'll get to a common standard eventually!
I am going to answer this from the perspective of a service-based company's project.
In my project we have used a naming convention for intents similar to the built-in small-talk intents, because it's easy to understand and categorize, e.g. FAQ.Company.your_question, Buy.Drinks.coffee, etc.
(For some unknown reason we capitalize the first letter of the main intent categories; in small-talk all letters are lower case, as they should be.)
For events we have used the notation typical of universal constants, like INVOKE_EVENT.
For parameters and contexts we used snake_case, e.g. coffee_cost.
Basically, it doesn't really matter as long as it's easy to understand and replicate. But you should always have a basic structure which you and your whole team follows throughout the project.
There aren't any, unfortunately, and the system is flexible enough that it doesn't matter too much. Pick names that make sense (duh).
Although most of the examples use it, I avoid using a space in the name. I treat them more like function names, so having a space in it breaks my aesthetics.
I tend to group Intents based around what part of the conversation they're working on, which is managed through the use of contexts that are set, and separate the part and subpart designations by dots, so it vaguely looks like package designations. I'll have Intents named something like
calculate.fallback
calculate.number
calculate.operation
fallback
welcome
Where the "calculate" ones all have an Input Context of "calculate".
Most of all, remember that Intents (and thus their names) represent what the user says and not what your code does with that. This is the big way that it differs from a function name.
Honestly, it really doesn't matter! As long as it's easy to replicate in code and clear to see/understand for anyone else who might be working on your agent, anything is fine. Generally, though, using a typical coding notation such as CamelCase is probably not a bad idea.
I'm learning about SQL databases so that I can work on an existing database, and I noticed this naming convention used on a lot of the files.
I'm thinking usp stands for User Stored Procedures, but I'm not entirely certain what parmsel is for. I've tried looking this up and see others using it with the case parmSel...
So what does usp_parmsel stand for?
The files are named like so under Programmability > Stored Procedures in the SQL Object Explorer:
dbo.usp_parmsel_CustomerExists
dbo.usp_parmsel_CustomerReferral
dbo.usp_parmsel_CustomerReceipt
Thanks!
Parmsel stands for parameter selector. You can look up parameter selectors on a search engine.
Apparently the PARMSEL field determines the format of the remainder of the area.
There is very vague information about it online.
Maybe this? https://documentation.devexpress.com/#CodeRush/CustomDocument1524
I was finally able to verify 100% with a higher-up: it stands for Parameter Select.
I saw a few places online where this was used; it looks like a rather old naming convention, which makes sense, since I'm working with an old system.
A REST API can have parameters in at least two ways:
As part of the URL-path (i.e. /api/resource/parametervalue )
As a query argument (i.e. /api/resource?parameter=value )
What is the best practice here? Are there any general guidelines when to use 1 and when to use 2?
Real world example: Twitter uses query parameters for specifying intervals. (http://api.twitter.com/1/statuses/home_timeline.json?since_id=12345&max_id=54321)
Would it be considered better design to put these parameters in the URL path?
If there are documented best practices, I have not found them yet. However, here are a few guidelines I use when determining where to put parameters in an url:
Optional parameters tend to be easier to put in the query string.
If you want to return a 404 error when the parameter value does not correspond to an existing resource then I would tend towards a path segment parameter. e.g. /customer/232 where 232 is not a valid customer id.
If, however, you want to return an empty list when the parameter is not found, then I suggest using query string parameters, e.g. /contacts?name=dave.
If a parameter affects an entire subtree of your URI space then use a path segment. e.g. a language parameter /en/document/foo.txt versus /document/foo.txt?language=en
I prefer unique identifiers to be in a path segment rather than a query parameter.
The official rules for URIs are found in the URI specification (RFC 3986). There is also another very useful spec, URI Templates (RFC 6570), that defines rules for parameterizing URIs.
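A hypothetical sketch of the 404-versus-empty-list guideline above, written as a small Flask app (the routes, handlers and sample data are illustrative, not part of the original answer):
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Toy in-memory data, purely for illustration.
CUSTOMERS = {232: {"id": 232, "name": "Alice"}}
CONTACTS = [{"name": "dave"}, {"name": "erin"}]

@app.route("/customers/<int:customer_id>")
def get_customer(customer_id):
    # Identifier in the path: an unknown id means a missing resource, so return 404.
    customer = CUSTOMERS.get(customer_id)
    if customer is None:
        abort(404)
    return jsonify(customer)

@app.route("/contacts")
def search_contacts():
    # Optional filter in the query string: no match returns an empty list, not 404.
    name = request.args.get("name")
    return jsonify([c for c in CONTACTS if name is None or c["name"] == name])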
Late answer but I'll add some additional insight to what has been shared, namely that there are several types of "parameters" to a request, and you should take this into account.
Locators - E.g. resource identifiers such as IDs or action/view
Filters - E.g. parameters that search, sort, or narrow down the set of results.
State - E.g. session identification, API keys, whatever.
Content - E.g. data to be stored.
Now let's look at the different places where these parameters could go.
Request headers & cookies
URL query string ("GET" vars)
URL paths
Body query string/multipart ("POST" vars)
Generally you want State to be set in headers or cookies, depending on what type of state information it is. I think we can all agree on this. Use custom http headers (X-My-Header) if you need to.
Similarly, Content only has one place to belong, which is in the request body, either as query strings or as http multipart and/or JSON content. This is consistent with what you receive from the server when it sends you content. So you shouldn't be rude and do it differently.
Locators such as "id=5" or "action=refresh" or "page=2" would make sense to have as a URL path, such as mysite.com/article/5/page=2, where you partly know what each part is supposed to mean (the basics such as article and 5 obviously mean get me the data of type article with id 5) and additional parameters are specified as part of the URI. They can be in the form of page=2, or page/2 if you know that after a certain point in the URI the "folders" are paired key-values.
Filters always go in the query string, because while they are a part of finding the right data, they are only there to return a subset or modification of what the Locators return alone. The search in mysite.com/article/?query=Obama (subset) is a filter, and so is /article/5?order=backwards (modification). Think about what it does, not just what it's called!
If "view" determines output format, then it is a filter (mysite.com/article/5?view=pdf) because it returns a modification of the found resource rather than homing in on which resource we want. If it instead decides which specific part of the article we get to see (mysite.com/article/5/view=summary) then it is a locator.
Remember, narrowing down a set of resources is filtering. Locating something specific within a resource is locating... duh. Subset filtering may return any number of results (even 0). Locating will always find that specific instance of something (if it exists). Modification filtering will return the same data as the locator, except modified (if such a modification is allowed).
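To make the split concrete, here is a hedged sketch using the Python requests library (the host, header name and payloads are made up): state goes in a header, locators in the path, filters in the query string, and content in the body.
import requests

# Filters and state: narrow down the article collection, authenticate via a header.
listing = requests.get(
    "https://mysite.example/article",
    params={"query": "Obama", "order": "backwards"},  # Filters (subset / modification)
    headers={"X-Api-Key": "my-secret-key"},           # State
)

# Locator and content: the collection is located by the path, the data to store goes in the body.
created = requests.post(
    "https://mysite.example/article",
    headers={"X-Api-Key": "my-secret-key"},
    json={"title": "New article", "body": "..."},     # Content
)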
Hope this helped give people some eureka moments if they've been lost about where to put stuff!
It depends on the design. There are no rules for URIs in REST over HTTP (the main thing is that they are unique). Often it comes down to a matter of taste and intuition...
I take following approach:
URL path element: the resource and its path element form a directory-like traversal to a subresource (e.g. /items/{id}, /users/items). When unsure, ask your colleagues: if they think of it as a traversal and think in terms of "another directory", then a path element is most likely the right choice.
URL parameter: when there is no real traversal (search resources with multiple query parameters are a very nice example of this).
IMO the parameters are better as query arguments. The URL is used to identify the resource, while the added query parameters specify which part of the resource you want, any state the resource should have, etc.
As per REST implementation:
1) Path variables are used for direct action on resources, like a contact or a song,
e.g.
GET /api/resource/{songid} or
GET /api/resource/{contactid} will return the respective data.
2) Query params/arguments are used for indirect resources, like the metadata of a song,
e.g.
GET /api/resource/{songid}?metadata=genres will return the genre data for that particular song.
"Pack" and POST your data against the "context" that universe-resource-locator provides, which means #1 for the sake of the locator.
Mind the limitations with #2. I prefer POSTs to #1.
note: limitations are discussed for
POST in Is there a max size for POST parameter content?
GET in Is there a limit to the length of a GET request? and Max size of URL parameters in _GET
P.S. These limits depend on client capabilities (browser) and server configuration.
According to the URI standard, the path is for hierarchical parameters and the query is for non-hierarchical parameters. Of course, it can be very subjective what counts as hierarchical for you.
In situations where multiple URIs are assigned to the same resource, I like to put the parameters necessary for identification into the path and the parameters necessary to build the representation into the query. (For me this way it is easier to route.)
For example:
/users/123 and /users/123?fields="name, age"
/users and /users?name="John"&age=30
For map reduce I like to use the following approaches:
/users?name="John"&age=30
/users/name:John/age:30
So it is really up to you (and your server side router) how you construct your URIs.
Note: just to mention, these parameters are query parameters, so what you are really doing is defining a simple query language. For complex queries (which contain operators like and, or, greater than, etc.) I suggest you use an already existing query language. The capabilities of URI templates are very limited...
As a programmer often on the client-end, I prefer the query argument. Also, for me, it separates the URL path from the parameters, adds to clarity, and offers more extensibility. It also allows me to have separate logic between the URL/URI building and the parameter builder.
I do like what manuel aldana said about the other option if there's some sort of tree involved. I can see user-specific parts being treed off like that.
There are no hard and fast rules, but the rule of thumb from a purely conceptual standpoint that I like to use can briefly be summed up like this: a URI path (by definition) represents a resource, and query parameters are essentially modifiers on that resource. So far that likely doesn't help...
With a REST API you have the major methods of acting upon a single resource using GET, PUT, and DELETE. Therefore whether something should be represented in the path or as a parameter can be reduced to whether those methods make sense for the representation in question. Would you reasonably PUT something at that path, and would it be semantically sound to do so? You could of course PUT something just about anywhere and bend the back-end to handle it, but you should be PUTing what amounts to a representation of the actual resource and not some needlessly contextualized version of it. For collections the same can be done with POST: if you wanted to add to a particular collection, what would be a URL that makes sense to POST to?
This still leaves some gray areas as some paths could point to what amount to children of parent resources which is somewhat discretionary and dependent on their use. The one hard line that this draws is that any type of transitive representation should be done using a query parameter, since it would not have an underlying resource.
In response to the real world example given in the original question (Twitter's API), the parameters represent a transitive query that filters on the state of the resources (rather than a hierarchy). In that particular example it would be entirely unreasonable to add to the collection represented by those constraints, and further that query would not be able to be represented as a path that would make any sense in the terms of an object graph.
The adoption of this type of resource oriented perspective can easily map directly to the object graph of your domain model and drive the logic of your API to the point where everything works very cleanly and in a fairly self-documenting way once it snaps into clarity. The concept can also be made clearer by stepping away from systems that use traditional URL routing mapped on to a normally ill-fitting data model (i.e. an RDBMS). Apache Sling would certainly be a good place to start. The concept of object traversal dispatch in a system like Zope also provides a clearer analog.
Here is my opinion.
Query params are used as metadata on a request. They act as a filter or modifier on an existing resource call.
Example:
/calendar/2014-08-08/events
should give calendar events for that day.
If you want events for a specific category
/calendar/2014-08-08/events?category=appointments
or if you need events longer than 30 minutes
/calendar/2014-08-08/events?duration=30
A litmus test would be to check whether the request can still be served without any query params.
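As a rough sketch of that litmus test (Flask-style routing; the handler and the stub data helper are hypothetical), the path alone is enough to serve the request, and query params only refine it:
from flask import Flask, jsonify, request

app = Flask(__name__)

def load_events_for_day(date):
    # Stub standing in for a real datastore query.
    return [{"title": "Standup", "category": "appointments", "duration_minutes": 45}]

@app.route("/calendar/<date>/events")
def calendar_events(date):
    # The request can still be served with no query params at all...
    events = load_events_for_day(date)
    # ...while query params only narrow the result down.
    category = request.args.get("category")
    if category:
        events = [e for e in events if e["category"] == category]
    min_duration = request.args.get("duration", type=int)
    if min_duration:
        events = [e for e in events if e["duration_minutes"] >= min_duration]
    return jsonify(events)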
I generally tend towards #2, as a query argument (i.e. /api/resource?parameter=value).
A third option is to actually post the parameter=value in the body.
This is because it works better for multi parameter resources and is more extendable for future use.
No matter which one you pick, make sure you only pick one, don't mix and match. That leads towards a confusing API.
One "dimension" of this topic has been left out, yet it's very important: there are times when the "best practices" have to come to terms with the platform we are implementing or augmenting with REST capabilities.
Practical example:
Many web applications nowadays implement the MVC (Model, View, Controller) architecture. They assume a certain standard path is provided, even more so when those web applications come with an "Enable SEO URLs" option.
Just to mention a fairly famous web application: an OpenCart e-commerce shop.
When the admin enables the "SEO URLs" it expects said URLs to come in a quite standard MVC format like:
http://www.domain.tld/special-offers/list-all?limit=25
Where
special-offers is the MVC controller that shall process the URL (showing the special-offers page)
list-all is the controller's action or function name to call. (*)
limit=25 is an option, stating that 25 items will be shown per page.
(*) list-all is a fictitious function name I used for clarity. In reality, OpenCart and most MVC frameworks have a default, implied (and usually omitted in the URL) index function that gets called when the user wants a default action to be performed. So the real world URL would be:
http://www.domain.tld/special-offers?limit=25
With a now fairly standard application or framework structure similar to the above, you'll often get a web server that is optimized for it and that rewrites URLs for it (the true "non-SEO URL" would be: http://www.domain.tld/index.php?route=special-offers/list-all&limit=25).
Therefore you, as a developer, are faced with dealing with the existing infrastructure and adapting your "best practices", unless you are the system admin, know exactly how to tweak an Apache / NGinx rewrite configuration (the latter can be nasty!), and so on.
So your REST API would often be much better off following the referring web application's standards, both for consistency with it and for ease/speed (and thus budget savings).
To get back to the practical example above, a consistent REST API would be something with URLs like:
http://www.domain.tld/api/special-offers-list?from=15&limit=25
or (non SEO URLs)
http://www.domain.tld/index.php?route=api/special-offers-list&from=15&limit=25
with a mix of path-formed arguments and query-formed arguments.
I see a lot of REST APIs that don't handle parameters well. One example that comes up often is when the URI includes personally identifiable information.
http://software.danielwatrous.com/design-principles-for-rest-apis/
I think a corollary question is when a parameter shouldn't be a parameter at all, but should instead be moved to the HEADER or BODY of the request.
It's a very interesting question.
You can use both of them, there's not any strict rule about this subject, but using URI path variables has some advantages:
Cache:
Most of the web cache services on the internet don't cache GET requests when they contain query parameters.
They do that because there are a lot of RPC systems using GET requests to change data on the server (a mistake: GET must be a safe method).
But if you use path variables, all of these services can cache your GET requests.
Hierarchy:
The path variables can represent hierarchy:
/City/Street/Place
It gives the user more information about the structure of the data.
But if your data doesn't have any hierarchical relation, you can still use path variables, using a comma or semicolon:
/City/longitude,latitude
As a rule, use a comma when the ordering of the parameters matters, and a semicolon when the ordering doesn't matter:
/IconGenerator/red;blue;green
Apart from those reasons, there are some cases when it's very common to use query string variables:
When you need the browser to automatically put HTML form variables into the URI
When you are dealing with algorithms. For example, the Google search engine uses query strings:
http://www.google.com/search?q=rest
To sum up, there's no strong reason to prefer one of these methods, but whenever you can, use URI variables.
There are a multitude of key-value stores available. Currently you need to choose one and stick with it. I believe an independent open API, not made by a key-value store vendor would make switching between stores much easier.
Therefore I'm building a datastore abstraction layer (like ODBC but focused on simpler key-value stores) so that someone can build an app once, and change key-value stores if necessary. Is this API too simple?
get(Key)
set(Key, Value)
exists(Key)
delete(Key)
As all the APIs I have seen so far seem to add so much more, I was wondering how many additional methods are actually necessary.
I have received some replies saying that set(null) could be used to delete an item, and that if get returns null then the item doesn't exist. This is bad for two reasons: firstly, it is not good to mix return types and statuses, and secondly, not all languages have the concept of null. See:
Do all programming languages have a clear concept of NIL, null, or undefined?
I do want to be able to perform many types of operation on the data, but as I understand it everything can be built on top of a key-value store. Is this correct? And should I provide these value-added functions too, e.g. map-reduce or indexes?
Internally we already have a basic version of this in Erlang and Ruby, and it has saved us a lot of time, and also enabled us to test performance for specific use cases of different key-value stores.
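For reference, a minimal Python sketch of the four-method interface described above (the class names and the in-memory implementation are purely illustrative, not part of the question):
from abc import ABC, abstractmethod
from typing import Any

class KeyValueStore(ABC):
    # Abstraction layer over a concrete key-value store backend.

    @abstractmethod
    def get(self, key: str) -> Any: ...

    @abstractmethod
    def set(self, key: str, value: Any) -> None: ...

    @abstractmethod
    def exists(self, key: str) -> bool: ...

    @abstractmethod
    def delete(self, key: str) -> None: ...

class InMemoryStore(KeyValueStore):
    # Trivial dict-backed implementation, handy for tests.
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data[key]  # raises KeyError instead of returning null

    def set(self, key, value):
        self._data[key] = value

    def exists(self, key):
        return key in self._data

    def delete(self, key):
        self._data.pop(key, None)

Note that raising for a missing key (or returning a sentinel) keeps get's return type clean, which sidesteps the null concern mentioned above.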
Do only what is absolutely necessary. Instead of asking if it is too simple, ask if it is too much, even if it only has one method.
Your API lacks some useful functions like "hasKey" and "clear". You might want to look at, say, Python's hack at it, http://docs.python.org/tutorial/datastructures.html#dictionaries, and pick and choose additional functions.
Everyone is saying, "simple is good" and that's true until "simple is too simple."
If all you are doing is getting, setting, and deleting keys, this is fine.
There is no such thing as "too simple" for an API. The simpler the better! If it solves the need the way it is, then leave it.
The delete method is unnecessary. You can just pass null to set.
Edited to add:
I'm only kidding! I would keep delete, and probably add Count, Contains, and maybe an enumerator (or two).
When creating an API, you need to ask yourself: what does my API provide to the user? If your API is so simplistic that it is faster and easier for your client to write their own app, then your API has failed. Ask yourself: does my functionality give them specific benefits? If the answer is no, it is too simplistic and generic.
I am all for simplifying an interface to its bare minimum but without having more details about the requirements of the system, it is tough to tell if this interface is sufficient. Sure looks concise enough though.
Don't forget to document the semantics for "key non-existent", as it isn't clear from reading your API definition above. Updated: I see you have added the exists method: is this necessary? You could use the get method and define a NIL of some sort, no?
Maybe worth thinking about: how about considering "freshness" of a value? i.e. an associated "last-modified" timestamp? Of course, it depends on your system requirements.
What about access control? Is it within scope of the API definition?
What about iterating through the keys? If there is a possibility of a large set, you might want to include some pagination semantics.
As mentioned, the simpler the better, but a simple iterator or key-listing method could be of use. I always end up needing to iterate through the set. A "size()" method too, if not taken care of by the iterator. It obviously depends on your usage, though.
It's not too simple, it's beautiful. If "exists(key)" is just a convenient shorthand for "get(Key) != null", you should consider removing it. I guess that depends on how large or complex the value you get() is.
Example case:
We're building a renting service, using SQL Server. Information about items that can be rented is stored in a table. Each item has a state that can be either "Available", "Rented" or "Broken". The different states reside in a lookup table.
ItemState table:
id name
1 'Available'
2 'Rented'
3 'Broken'
Adding to this we have a business rule which states that whenever an item is returned, its state is changed from "Rented" to "Available".
This could be done with an update statement like "update Items set state=1 where id=#itemid". In application code we might have an enum that maps to the ItemState ids. However, these contain hard-coded values that could lead to maintenance issues later on. Say a developer were to change the set of states but forgot to fix the related business logic layer...
What good methods or alternate designs are there for dealing with this type of design issues?
Links to related articles are also appreciated in addition to direct answers.
In my experience this is a case where you actually have to hard-code, preferably by using an Enum whose integer values match the ids of your lookup tables. I can't see anything wrong with saying that "1" is always "Available" and so forth.
Most systems that I've seen hard code the lookup table values and live with it. That's because, in practice, code tables rarely change as much as you think they might. And if they ever do change, you generally need to re-compile any programs that rely on that DDL anyway.
That said, if you want to make the code maintainable (a laudable goal), the best approach would be to externalize the values into a properties file. Then you can edit this file later without having to re-code your entire app.
The limiting factor here is that your app depends for its own internal state on the value you get from the lookup table, so that implies a certain amount of coupling.
For lookups where the app doesn't rely on that code, (for instance, if your code table stores a list of two-letter state codes for use in an address drop-down), then you can lazily load the codes into an object and access them only when needed. But that won't work for what you're doing.
When you have your lookup tables as well as enums defined in the code, then you always have an issue with keeping them in sync. There is not much that can be done here. Both live effectively in two different worlds and are generally unaware of each other.
You may wish to reject using lookup tables and only let your business logic operate on these values. In that case you lose the option of relying on referential integrity to back you up on data integrity.
The other option is to build your application in such a way that you never need these values in your code. That means moving part of your business logic to the database layer, i.e. putting it in stored procedures and triggers. This also has the benefit of being client-agnostic: anyone can invoke the SPs and be assured the data will be kept in a consistent state, consistent with your business logic rules as well.
You'll need to have some predefined value that never changes, be it an integer, a string or something else.
In your case, the numerical value of the state is the state's surrogate PRIMARY KEY which should never change in a well-designed database.
If you're concerned about the consistency, use a CHAR code: A, R or B.
However, you should stick to it as well as to a numerical code so that A always means Available etc.
Your database structure should be documented just as well as the code is.
The answer depends entirely on the language you're using: solutions for this are not the same in Java, PHP, Smalltalk or even Assembler...
But let me tell you something: while it's true hard coded values are not a great thing, there are times in which you do need them. And this one is pretty much one of them: you need to declare in your code your current knowledge of the business logic, which includes these hard coded states.
So, in this particular case, I would hard code those values.
Don't overdesign it. Before trying to come up with a solution to this problem, you need to figure out if it's even a problem. Can you think of any legit hypothetical scenario where you would change the values in the itemState table? Not just "What if someone changes this table?" but "Someone wants to change this table in X way for Y reason, what effect would that have?". You need to stay realistic.
New state? you add a row, but it doesn't affect the existing ones.
Removing a state? You have to remove the references to it in code anyway.
Changing the id of a state? There is no legit reason to do that.
Changing the name of a state? There is no legit reason to do that.
So there really should be no reason to worry about this. But if you must have this cleanly maintainable in the case of irrational people who randomly decide to change Available to 2 because it just fits their Feng Shui better, make sure all tables are generated via a script which reads these values from a configuration file, and then make sure all code reads constants from that same configuration file. Then you have one definition location and any time you want to change the value you modify that configuration file instead of the DB/code.
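As a rough sketch of that single-configuration-file idea (the file names, table name and output path are made up for illustration), both the SQL seed script and the application's enum can be derived from one source so they cannot drift apart:
import json
from enum import IntEnum

# Assumed config file, e.g. {"Available": 1, "Rented": 2, "Broken": 3}
with open("item_states.json") as f:
    states = json.load(f)

# 1) Generate the SQL seed script for the lookup table from the config file.
with open("seed_item_state.sql", "w") as sql:
    for name, state_id in states.items():
        sql.write(f"INSERT INTO ItemState (id, name) VALUES ({state_id}, '{name}');\n")

# 2) Build the enum the application code uses from the very same file.
ItemState = IntEnum("ItemState", states)
assert ItemState.Available == 1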
I think this is a common problem and a valid concern, that's why I googled and found this article in the first place.
What about creating a public static class to hold all the lookup values, but instead of hard-coding, we initialize these values when the application is loaded and use names to refer to them?
In my application we tried this, and it worked. You can also do some checking, e.g. that the number of different possible values of a lookup in code is the same as in the db; if it's not, log/email/etc. But I don't want to manually code this for the status of 40+ biz entities.
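A hedged sketch of such a startup check, assuming a DB-API cursor and the ItemState table from the question (the function name is mine):
from enum import IntEnum

class ItemState(IntEnum):
    AVAILABLE = 1
    RENTED = 2
    BROKEN = 3

def check_item_state_sync(cursor):
    # Fail fast at startup if the enum and the lookup table have drifted apart.
    cursor.execute("SELECT id, name FROM ItemState")
    db_ids = {row[0] for row in cursor.fetchall()}
    code_ids = {member.value for member in ItemState}
    if db_ids != code_ids:
        raise RuntimeError(
            f"ItemState mismatch: db has ids {sorted(db_ids)}, code has ids {sorted(code_ids)}"
        )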
Moreover, this can be part of the bigger problem of OR mapping. We're exposed to too many details of the persistence layer, and thus we have to take care of it. With technologies like Entity Framework, we don't need to worry about the "sync" part because it's automated, am I right?
Thanks!
I've used a similar method to what you're describing - a table in the database with values and descriptions (useful for reporting, etc.) and an enum in code. I've handled the synchronization with a comment in code saying something like "these values are taken from table X in database ABC" so that the programmer knows the database needs to be updated. To prevent changes from the database side without the corresponding changes in code I set permissions on the table so that only certain people (who hopefully remember they need to change the code as well) have access.
The values have to be hard-coded, which effectively means that they can't be changed in the database, which means that storing them in the database is redundant.
Therefore, hard-code them and don't have a lookup table in the database. Instead, store the item's state directly in the items table.
You can structure your database so that your application doesn't actually have to care about the codes themselves, but rather the business rules behind them.
I have done both of the following:
Do one or more of your codes have a certain characteristic, such as IsAvailable, that the application cares about? If so, add it as a flag column to the code table, where those that match are set to true (or your DB's equivalent), and those that don't are set to false.
Do you need to use a specific, single code under a certain condition? You can create a singleton table, named something like EnvironmentSettings, with a column such as ItemStateIdOnReturn that's a foreign key to the ItemState table.
If I wanted to avoid declaring an enum in the application, I would use #2 to address the example in the question.
Whether you take this approach depends on your application's priorities. This type of structure comes at the cost of additional development and lookup overhead. Plus, if every individual code comes with its own business rules, then it's not practical to create one new column per required code.
But, it may be worthwhile if you don't want to worry about synchronizing your application with the contents of a code table.
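To make approach #2 above concrete, here is a hedged sketch using a plain DB-API cursor (the table and column names follow this answer and the question; the function itself is hypothetical): the application asks the settings table which state a returned item should get, instead of hard-coding the id of "Available".
def mark_item_returned(cursor, item_id):
    # Look up the target state from the singleton settings table rather than
    # embedding the ItemState id in application code.
    # (qmark parameter style, as used by e.g. pyodbc against SQL Server)
    cursor.execute(
        """
        UPDATE Items
        SET state = (SELECT ItemStateIdOnReturn FROM EnvironmentSettings)
        WHERE id = ?
        """,
        (item_id,),
    )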