Parameterized sub-flow - Mule

Is it possible to create a parameterized sub-flow in Mule? I have to do the same complex process twice, almost exactly the same, with a few minor differences. Is the best way to do this to set a bunch of flow variables before the flow reference and then use those variables in the flow I want to parameterize, or is there a simpler way?

You are heading in the right direction. If you have the same piece of functionality that you want to run against different sets of data, go with sub-flows. The approach you mention works, and it is the best one I can see.
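Mule's XML configuration aside, the pattern being endorsed is easy to see in miniature: the caller sets a few named variables, then invokes the shared logic, which reads its "parameters" from them, much as a sub-flow reads flow variables set before the flow reference. A minimal, language-neutral sketch in Python (all names here are hypothetical, purely for illustration):

    def complex_process(ctx):
        # Shared logic; reads its "parameters" from the ctx dict,
        # the way a Mule sub-flow reads flow variables.
        endpoint = ctx["endpoint"]       # differs per caller
        retries = ctx.get("retries", 3)  # optional, with a default
        return f"processed via {endpoint} with {retries} retries"

    # Caller A: set the variables, then invoke the shared logic.
    print(complex_process({"endpoint": "system-a"}))

    # Caller B: same logic, different parameters.
    print(complex_process({"endpoint": "system-b", "retries": 5}))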

Related

What is the Mule-recommended method to invoke flows in MuleSoft?

I want to understand the best way to invoke flows in MuleSoft. Is it using the lookup function in DataWeave, or using the flow-ref component?
Also, is there any disadvantage to using the lookup function going forward? The current Mule runtime is 3.8.2.
Use flow references with sub-flows when possible.
The lookup function will prevent you from previewing your DataWeave expression at design time; it can only invoke flows (not sub-flows), and flows use more resources than sub-flows.
I personally avoid lookup; I only use it when I have no other way.
I would use it when the lookup pattern applies: when you have a big input like a CSV and, while mapping it, you want to look up a field of each entry in a third-party system without running out of memory. In such cases the lookup pattern fits perfectly.
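To make that concrete, here is a hedged Python sketch of the same streaming-enrichment idea (not DataWeave; lookup_customer, the file name, and the column names are hypothetical stand-ins for the third-party call and the CSV layout):

    import csv

    def lookup_customer(customer_id):
        # Hypothetical stand-in for the third-party lookup call.
        return {"id": customer_id, "status": "active"}

    def enrich_rows(path):
        # Yield enriched rows one at a time; the file is streamed,
        # so memory use stays flat regardless of CSV size.
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                row["customer"] = lookup_customer(row["customer_id"])
                yield row

    # for row in enrich_rows("orders.csv"):
    #     process(row)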

Using monads to do failable transformations in bulk?

Just starting to grok monads. I think in Clojure, so purity isn't terribly important to me.
I have a series of business operations (composable transforms) which may fail. This can be abstracted nicely with the error monad.
Some of the business operations involve database IO, and I need to perform the operations in bulk for speed. Each bulk operation acts on a set of independent items, so one failure must not fail the whole set.
Should I just think of my bulk transforms as a series of functions on one object (map), done inside something like the error monad but acting on independent items in a seq? Does seq-monad help me here? How should I be thinking about this? Any other ideas?
I don't see any particular benefit in combining this with the IO monad for my database side effects in Clojure; thoughts on this?
It's hard to say exactly what you need, because your business case seems a little involved, but I think you may get some mileage from the clojure.algo.monads library.
You can create your own variation on an error-monad or maybe-monad that internally deals with batches.
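To make the shape concrete outside Clojure, here is a hedged Python sketch of the idea (the validate and price steps are hypothetical): each item is threaded through the composable, failable steps independently, so a failure becomes an error value local to that item instead of aborting the batch.

    def bind(result, step):
        # Error-monad bind: skip the step if an error already occurred.
        ok, value = result
        return step(value) if ok else result

    def validate(item):
        return (True, item) if item.get("qty", 0) > 0 else (False, "bad qty")

    def price(item):
        return (True, {**item, "total": item["qty"] * 2})

    def run_pipeline(item, steps):
        result = (True, item)
        for step in steps:
            result = bind(result, step)
        return result

    # Map the pipeline over a seq of independent items;
    # failures stay local to the item that produced them.
    batch = [{"qty": 3}, {"qty": 0}, {"qty": 1}]
    results = [run_pipeline(i, [validate, price]) for i in batch]
    # [(True, {'qty': 3, 'total': 6}), (False, 'bad qty'), (True, {'qty': 1, 'total': 2})]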

Input Sanitization Best Practices

Our team has recently been working on a logic and data layer for our database. We were not approved to use Entity Framework or LINQ to SQL for the data layer, so it was primarily built by hand, and a lot of the SQL is auto-generated. An obvious downfall of this is the need to sanitize inputs prior to retrieval and insertion.
What are the best methods for doing this? Searching for terms like insert, delete, etc. seems like a poor way to accomplish this. Is there a better alternative?
Generally, the best way to sanitize is to work like human kidneys do: reject everything by default and pick out only what you know is good/safe.
I assume you already use parameters for all SQL queries that take external input.
It is also usually good practice to sanitize input as close to the UI as possible.
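For illustration, here is a minimal sketch of both points using Python's built-in sqlite3 (the table and column names are made up): values travel as bound parameters, never spliced into the SQL string, and input is whitelisted before it gets anywhere near the query.

    import re
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    def add_user(name):
        # Reject by default: allow only what is known to be safe.
        if not re.fullmatch(r"[A-Za-z][A-Za-z0-9_]{0,31}", name):
            raise ValueError(f"invalid user name: {name!r}")
        # Parameterized query: the driver binds the value, so quoting
        # tricks in the input cannot change the SQL statement.
        conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

    add_user("alice")
    # add_user("x'; DROP TABLE users; --")  # raises ValueError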

Is this API too simple?

There are a multitude of key-value stores available, and currently you need to choose one and stick with it. I believe an independent, open API, not made by a key-value store vendor, would make switching between stores much easier.
I'm therefore building a datastore abstraction layer (like ODBC, but focused on simpler key-value stores) so that someone can build an app once and change key-value stores if necessary. Is this API too simple?
get(Key)
set(Key, Value)
exists(Key)
delete(Key)
As all the APIs I have seen so far seem to add so much more, I was wondering how many additional methods are really necessary.
I have received some replies saying that set(null) could be used to delete an item, and that if get returns null then the item doesn't exist. This is bad for two reasons. Firstly, it is not good to mix return types and statuses; secondly, not all languages have the concept of null. See:
Do all programming languages have a clear concept of NIL, null, or undefined?
I do want to be able to perform many types of operations on the data, but as I understand it, everything can be built on top of a key-value store. Is this correct? And should I provide these value-added functions too, e.g. mapreduce or indexes?
Internally we already have a basic version of this in Erlang and Ruby, and it has saved us a lot of time; it has also enabled us to test the performance of different key-value stores for specific use cases.
Do only what is absolutely necessary. Instead of asking if it is too simple, ask if it is too much, even if it only has one method.
Your API lacks some useful functions like "hasKey" and "clear". You might want to look at, say, Python's hack at it, http://docs.python.org/tutorial/datastructures.html#dictionaries, and pick and choose additional functions.
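Since a Python dict already covers the proposed surface, a thin adapter shows just how small the core is, with a couple of the commonly requested extras bolted on. This is a hedged sketch; the class and method names are illustrative, not a proposed standard:

    class DictStore:
        # The four proposed operations, backed by a plain dict.

        def __init__(self):
            self._data = {}

        def get(self, key):
            return self._data[key]  # raises KeyError if absent

        def set(self, key, value):
            self._data[key] = value

        def exists(self, key):
            return key in self._data

        def delete(self, key):
            self._data.pop(key, None)

        # Optional extras often asked for on top of the core four:
        def size(self):
            return len(self._data)

        def keys(self):
            return iter(self._data)

    store = DictStore()
    store.set("a", 1)
    assert store.exists("a") and store.get("a") == 1
    store.delete("a")
    assert not store.exists("a")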
Everyone is saying, "simple is good" and that's true until "simple is too simple."
If all you are doing is getting, setting, and deleting keys, this is fine.
There is no such thing as "too simple" for an API. The simpler the better! If it solves the need the way it is, then leave it.
The delete method is unnecessary. You can just pass null to set.
Edited to add:
I'm only kidding! I would keep delete, and probably add Count, Contains, and maybe an enumerator (or two).
When creating an API, you need to ask yourself: what does my API provide the user? If your API is so simplistic that it is faster and easier for your clients to write their own version, then your API has failed. Ask yourself: does my functionality give them specific benefits? If the answer is no, it is too simplistic and generic.
I am all for simplifying an interface to its bare minimum but without having more details about the requirements of the system, it is tough to tell if this interface is sufficient. Sure looks concise enough though.
Don't forget to document the semantics for a non-existent key, as it isn't clear from reading your API definition above. Updated: I see you have added the exists method. Is it necessary? You could use the get method and define a NIL of some sort, no?
Maybe worth thinking about: how about considering the "freshness" of a value, i.e. an associated "last-modified" timestamp? Of course, it depends on your system requirements.
What about access control? Is it within scope of the API definition?
What about iterating through the keys? If there is a possibility of a large set, you might want to include some pagination semantics.
As mentioned, the simpler the better, but a simple iterator or key-listing method could be of use. I always end up needing to iterate through the set. A "size()" method too, if not taken care of by the iterator. It obviously depends on your usage, though.
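A cursor-style page over the keys is one common way to keep iteration safe for large stores. A small hedged sketch (the list_keys name and the cursor scheme are made up):

    def list_keys(sorted_keys, cursor=None, limit=100):
        # Return one page of keys plus an opaque cursor for the next
        # page; sorted_keys stands in for the store's ordered key space.
        start = 0 if cursor is None else sorted_keys.index(cursor) + 1
        page = sorted_keys[start:start + limit]
        next_cursor = page[-1] if start + limit < len(sorted_keys) else None
        return page, next_cursor

    keys = [f"k{i:03d}" for i in range(250)]
    page, cur = list_keys(keys, limit=100)   # k000..k099
    page, cur = list_keys(keys, cursor=cur)  # k100..k199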
It's not too simple, it's beautiful. If "exists(key)" is just a convenient shorthand for "get(Key) != null", you should consider removing it. I guess that depends on how large or complex the value you get() is.

Parsing criteria in NHibernate

Is it possible to parse criteria from a string?
You're not giving anyone much to go on, so I'll just have to take a guess at what you're trying to ask...
If you're looking for a simple criteria.Parse("string here"); then no, I don't believe such a thing exists.
However, the criteria interface lends itself very well to dynamic creation (in fact, that's its intended purpose). As such, yes, you could write a string parser to create ICriteria elements from tokens.
Perhaps if you provide more information on the problem you are trying to solve someone can respond with a better answer.