Partitioned key space for StackExchange Redis - redis

When developing a component that uses Redis, I've found it a good pattern to prefix all keys used by that component so that it does not interfere with other components.
Examples:
A component managing users might use keys prefixed by user: and a component managing a log might use keys prefixed by log:.
In a multi-tenancy system I want each customer to use a separate key space in Redis to ensure that one customer's data does not interfere with another's. The prefix would then be something like customer:<id>: for all keys related to a specific customer.
Using Redis is still new stuff for me. My first idea for this partitioning pattern was to use separate database identifiers for each partition. However, that seems to be a bad idea because the number of databases is limited and it seems to be a feature that is about to be deprecated.
An alternative to this would be to let each component get an IDatabase instance and a RedisKey that it shall use to prefix all keys. (I'm using StackExchange.Redis)
I've been looking for an IDatabase wrapper that automatically prefixes all keys so that components can use the IDatabase interface as-is without having to worry about its keyspace. I didn't find anything though.
So my question is: What is a recommended way to work with partitioned key spaces on top of StackExchange Redis?
I'm now thinking about implementing my own IDatabase wrapper that would prefix all keys. I think most methods would just forward their calls to the inner IDatabase instance. However, some methods would require a bit more work, for example SORT and RANDOMKEY.

I've created an IDatabase wrapper now that provides key space partitioning.
The wrapper is created using an extension method on IDatabase:
ConnectionMultiplexer multiplexer = ConnectionMultiplexer.Connect("localhost");
IDatabase fullDatabase = multiplexer.GetDatabase();
IDatabase partitioned = fullDatabase.GetKeyspacePartition("my-partition");
Almost all of the methods in the partitioned wrapper have the same structure:
public bool SetAdd(RedisKey key, RedisValue value, CommandFlags flags = CommandFlags.None)
{
    return this.Inner.SetAdd(this.ToInner(key), value, flags);
}
They simply forward the invocation to the inner database and prepend the key space prefix to any RedisKey arguments before passing them on.
The CreateBatch and CreateTransaction methods simply create wrappers for those interfaces, but with the same base wrapper class (as most methods to wrap are defined by IDatabaseAsync).
The KeyRandomAsync and KeyRandom methods are not supported. Invocations will throw a NotSupportedException. This is not a concern for me, and to quote Marc Gravell:
I can't think of any sane way of achieving that, but I suspect NotSupportedException("RANDOMKEY is not supported when a key-prefix is specified") is entirely reasonable (this isn't a commonly used command anyway)
I have not yet implemented ScriptEvaluate and ScriptEvaluateAsync because it is unclear to me how I should handle the RedisResult return value. The input parameters to these methods accept RedisKey which should be prefixed, but the script itself could return keys and in that case I think it would make (most) sense to unprefix those keys. For the time being, those methods will throw a NotImplementedException...
The sort methods (Sort, SortAsync, SortAndStore and SortAndStoreAsync) have special handling for the by and get parameters. These are prefixed as normal unless they have one of the special values: nosort for by and # for get.
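To illustrate, here is a hedged sketch of what that special handling could look like; the helper names are hypothetical, and ToInner is the key-prefixing helper shown further down. A real implementation would work on raw bytes rather than strings.
public RedisValue[] Sort(RedisKey key, long skip = 0, long take = -1, Order order = Order.Ascending,
    SortType sortType = SortType.Numeric, RedisValue by = default(RedisValue), RedisValue[] get = null,
    CommandFlags flags = CommandFlags.None)
{
    return this.Inner.Sort(this.ToInner(key), skip, take, order, sortType,
        this.ToInnerBy(by), this.ToInnerGet(get), flags);
}

private RedisValue ToInnerBy(RedisValue by)
{
    // "nosort" tells redis not to sort at all; it is not a key pattern, so it must not be prefixed.
    return (by.IsNull || by == "nosort") ? by : this.PrefixPattern(by);
}

private RedisValue[] ToInnerGet(RedisValue[] get)
{
    if (get == null)
    {
        return null;
    }

    var result = new RedisValue[get.Length];
    for (int i = 0; i < get.Length; i++)
    {
        // "#" means "the element itself"; it is not a key pattern, so it must not be prefixed.
        result[i] = get[i] == "#" ? get[i] : this.PrefixPattern(get[i]);
    }

    return result;
}

private RedisValue PrefixPattern(RedisValue pattern)
{
    // Hypothetical helper: apply the same prefix used for keys to a BY/GET key pattern.
    return this.Prefix.ToString() + pattern.ToString();
}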
Finally, to allow prefixing ITransaction.AddCondition I had to use a bit of reflection:
internal static class ConditionHelper
{
    public static Condition Rewrite(this Condition outer, Func<RedisKey, RedisKey> rewriteFunc)
    {
        ThrowIf.ArgNull(outer, "outer");
        ThrowIf.ArgNull(rewriteFunc, "rewriteFunc");

        Type conditionType = outer.GetType();
        object inner = FormatterServices.GetUninitializedObject(conditionType);

        foreach (FieldInfo field in conditionType.GetFields(BindingFlags.NonPublic | BindingFlags.Instance))
        {
            if (field.FieldType == typeof(RedisKey))
            {
                field.SetValue(inner, rewriteFunc((RedisKey)field.GetValue(outer)));
            }
            else
            {
                field.SetValue(inner, field.GetValue(outer));
            }
        }

        return (Condition)inner;
    }
}
This helper is used by the wrapper like this:
internal Condition ToInner(Condition outer)
{
    if (outer == null)
    {
        return outer;
    }
    else
    {
        return outer.Rewrite(this.ToInner);
    }
}
There are several other ToInner methods for different kinds of parameters that contain RedisKey, but they all more or less end up calling:
internal RedisKey ToInner(RedisKey outer)
{
    return this.Prefix + outer;
}
I have now created a pull request for this:
https://github.com/StackExchange/StackExchange.Redis/pull/92
The extension method is now called WithKeyPrefix and the reflection hack for rewriting conditions is no longer needed, as the new code has access to the internals of the Condition classes.

Intriguing suggestion. Note that redis already offers a simple isolation mechanism by way of database numbers, for example:
// note: default database is 0
var logdb = muxer.GetDatabase(1);
var userdb = muxer.GetDatabase(2);
StackExchange.Redis will handle all the work to issue commands to the correct databases - i.e. commands issued via logdb will be issued against database 1.
Advantages:
inbuilt
works with all clients
provides complete keyspace isolation
doesn't require additional per-key space for the prefixes
works with KEYS, SCAN, FLUSHDB, RANDOMKEY, SORT, etc
you get high-level per-db keyspace metrics via INFO
Disadvantages:
not supported on redis-cluster
not supported via intermediaries like twemproxy
Note:
the number of databases is a configuration option; IIRC it defaults to 16 (numbers 0-15), but can be tweaked in your configuration file via:
databases 400 # moar databases!!!
This is actually how we (Stack Overflow) use redis with multi-tenancy; database 0 is "global", 1 is "stackoverflow", etc. It should also be clear that if required, it is then a fairly simple thing to migrate an entire database to a different node using SCAN and MIGRATE (or more likely: SCAN, DUMP, PTTL and RESTORE - to avoid blocking).
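A rough sketch of that non-blocking migration using StackExchange.Redis; the host names are placeholders, and error handling, batching and "key already exists" handling are omitted:
var source = ConnectionMultiplexer.Connect("source-server:6379");
var target = ConnectionMultiplexer.Connect("target-server:6379");

const int databaseId = 1; // the tenant database to migrate
var sourceServer = source.GetServer("source-server", 6379);
var sourceDb = source.GetDatabase(databaseId);
var targetDb = target.GetDatabase(databaseId);

foreach (RedisKey key in sourceServer.Keys(databaseId, pageSize: 1000)) // SCAN under the covers
{
    byte[] dump = sourceDb.KeyDump(key);           // DUMP
    if (dump == null) continue;                    // key expired or was deleted in the meantime
    TimeSpan? ttl = sourceDb.KeyTimeToLive(key);   // PTTL
    targetDb.KeyRestore(key, dump, ttl);           // RESTORE
}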
Since database partitioning is not supported in redis-cluster, there may be a valid scenario here, but it should also be noted that redis nodes are easy to spin up, so another valid option is simply: use different redis groups for each (different port numbers, etc) - which would also have the advantage of allowing genuine concurrency between nodes (CPU isolation).
However, what you propose is not unreasonable; there is actually "prior" here... again, largely linked to how we (Stack Overflow) use redis: while databases work fine for isolating keys, no isolation is currently provided by redis for channels (pub/sub). Because of this, StackExchange.Redis actually includes a ChannelPrefix option on ConfigurationOptions, which allows you to specify a prefix that is automatically added during PUBLISH and removed when receiving notifications. So if your ChannelPrefix is foo: and you publish an event bar, the actual event is published to the channel foo:bar; likewise, any callback you have only sees bar.
It could be that this is something that is viable for databases too, but to emphasize: at the moment this configuration option is at the multiplexer level - not the individual ISubscriber. To be comparable to the scenario you present, this would need to be at the IDatabase level.
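For reference, a small sketch of how the existing multiplexer-level option is used (in newer library versions the implicit string-to-RedisChannel conversion may be flagged as obsolete):
var options = ConfigurationOptions.Parse("localhost");
options.ChannelPrefix = "foo:"; // implicit string -> RedisChannel conversion
var muxer = ConnectionMultiplexer.Connect(options);

var sub = muxer.GetSubscriber();
// The handler sees "bar"; the prefix is stripped on the way in.
sub.Subscribe("bar", (channel, message) => Console.WriteLine(message));
// This actually publishes to the channel "foo:bar".
sub.Publish("bar", "hello");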
Possible, but a decent amount of work. If possible, I would recommend investigating the option of simply using database numbers...

Related

Value object in event sourcing

Is there a place for value objects in an event sourced domain model?
Let's define a value object as an object with immutable state that guards its invariants and has no particular identifier.
An event sourced domain model in this context is a domain that is entirely or partially event sourced, meaning that its current state can be derived from applying all events that have occurred in the past. Events themselves are considered immutable, even over time.
Debate has taken place about the validity of using value objects within events - this question goes slightly further: Do value objects have a place in event sourced domains at all?
The (potential) problem with using value objects is that it becomes rather tricky to alter the domain in such a way that invariants are tightened.
An example of this scenario would be to have a Username value object, with the sole constraint that the name must be anywhere between 2 and 16 characters.
While this has been working well for some time, the business decides to only allow usernames of at least 5 characters.
A migration period begins and users with names of less than 5 characters are asked to update their names.
Let's say the process was successful, correction events are applied and everyone is happy.
We tighten the constraints on our Username value object to require at least 5 characters.
For a while everyone is happy, but then we discover a problem with the snapshots and replay all events.
We now face an exception from our Username object: by loading the historic data, we're breaking an invariant of our domain.
The rules of a value object apply retroactively - does this make them inherently unsuitable for event sourcing? Would it be worth applying versioning of value objects? Is there a simpler way of avoiding such problems?
I would say that the moment you redefine what Username means, and you don't migrate historical data somehow, you've essentially created 2 different meanings of Username.
Because there are 2 different meanings of the word, you have to make it explicit in the code somehow. "Versioning" is one way, although I wouldn't use such a generic solution, there are different modeling options.
You could make it explicit that the history of a "username" is just that, a history. So for example create a HistoricUsername, which is the event-sourced object, even a value object if you want. And create a Username which is at all times the username with the most current rules, which is not persisted at all, but created from a HistoricUsername if it can.
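A minimal sketch of that modeling; the names and the TryFromHistoric shape are only illustrative:
// Event-sourced/historical value: accepts anything that was ever valid.
public sealed class HistoricUsername
{
    public string Value { get; }

    public HistoricUsername(string value)
    {
        if (string.IsNullOrEmpty(value))
        {
            throw new ArgumentException("A username was never allowed to be empty.", nameof(value));
        }
        Value = value;
    }
}

// Current-rules value: never persisted, only derived from history when possible.
public sealed class Username
{
    public string Value { get; }

    private Username(string value)
    {
        Value = value;
    }

    public static bool TryFromHistoric(HistoricUsername historic, out Username username)
    {
        if (historic.Value.Length >= 5 && historic.Value.Length <= 16)
        {
            username = new Username(historic.Value);
            return true;
        }

        username = null;
        return false;
    }
}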
Some people sometimes suggest extracting the "rules" from the object and re-applying them later. That way the object itself is valid at all times and you can ask it to validate itself against rules that might change. I don't really prefer these kinds of solutions, but it's an option, and the Username would still be a value object.
So the problem is not really that value-objects don't fit into event-sourcing, it's just that the modeling has to be more accurate.
Do value objects have a place in event sourced domains at all?
Yes.
Is there a simpler way of avoiding such problems?
"Don't do that."
The problem you are describing is really one about messaging - if we make backwards incompatible changes to our messages, then things break.
(More precisely, you have a "Username" message, and you are trying to re-use that message with a new set of constraints that reject some previously valid uses of the message).
The answer is that you don't introduce backwards incompatible changes - instead, introduce new names that match the new requirements, and deprecate the old ones.
Which is to say, adding support for new messages, and removing support for the old messages, become two separately managed options.
Greg Young's book Versioning in an Event Sourced System dedicates some chapters to this idea. Also, Rich Hickey ends up touching on these important ideas in most of his talks -- I'd suggest starting from Spec-ulation.
The "value object", meaning the type that the current implementation of the domain model uses to move the information around, is a separate concern from the messages. The data structures we use in memory don't need to be coupled to our serialization formats.
The representation of the information on the wire is distinct from the representation of information in memory, and that in turn is distinct from the abstractions that manipulate the information in memory.
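As a small illustration of that separation (the event and value types here are hypothetical):
// Wire/stored representation: a dumb record that stays readable no matter how
// the domain rules evolve.
public sealed class UsernameChosen
{
    public Guid UserId { get; set; }
    public string Username { get; set; }   // plain string, not the value object
    public DateTime OccurredAtUtc { get; set; }
}

// In-memory abstraction: built from the event during replay, shaped by whatever
// the current model needs in order to do its work.
public sealed class Username
{
    public string Value { get; }

    public Username(string value)
    {
        Value = value ?? throw new ArgumentNullException(nameof(value));
    }
}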
The challenging thing is that, at the beginning of a project, you have the least amount of information about when the different representations are going to diverge.
We've solved this in a slightly different way. By separating the public API of our value objects from the internal (domain only) API, we are able to evolve one without affecting the other.
For example:
public class Username
{
    private readonly string value;

    // Domain-only (internal) constructor.
    // Does not enforce constraints and can only be called within the domain.
    internal Username(string value)
    {
        this.value = value;
    }

    // Public factory method.
    // Enforces business constraints. Used by consumers of the domain (application layer etc.)
    // to create new instances of the value object.
    public static Username Create(string value)
    {
        // Business constraints. These will evolve and grow over time.
        if (value == null)
        {
            // throw exception etc.
        }

        if (value.Length < 2)
        {
            // throw exception etc.
        }

        return new Username(value);
    }
}
Consumers of the domain must use the static Create method to create a new instance of the value object. This factory method contains all of our business constraints and prevents an instance being created in an invalid state.
Inside the domain, classes have access to the internal (constraint-less) constructor. Since this does not enforce any business constraints, an instance of the value object can always be created in this way (regardless of its value). By using this constructor when replaying events we can ensure that historical data will always succeed.
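For illustration, an aggregate's two code paths might use the two entry points like this (the aggregate and event types are hypothetical):
public sealed class User
{
    public Username Name { get; private set; }

    // Command path: input from outside the domain goes through the factory,
    // so the current business rules are enforced.
    public void Rename(string newName)
    {
        Username.Create(newName); // throws if the name violates today's constraints
        Apply(new UserRenamed { Username = newName });
    }

    // Replay path: historical data bypasses the constraints via the internal constructor.
    private void When(UserRenamed @event)
    {
        Name = new Username(@event.Username);
    }

    private void Apply(UserRenamed @event)
    {
        // record the uncommitted event, then update state
        When(@event);
    }
}

public sealed class UserRenamed
{
    public string Username { get; set; }
}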
The benefits of this design are:
A single class is used to represent the domain concept (no need for multiple classes, versioning etc.).
Business rules are free to evolve over time.
Historical data always works. A Username from a year ago is still a user name, even if our rules have changed.
Although already answered I do find this an interesting situation.
I agree with others that the event data should be record-based and, therefore, nothing more than a data container that may be used to reconstitute the aggregate.
That being said when the rules change so does the domain. A major portion of domain-driven design is to capture as much of the domain (rules/structure) as is required. If this is the case should the changes in the rules not also be kept?
For instance, if we have a Username value object and it starts out with the 2 to 16 character rule, then that is coded as such:
public class Username
{
    public string Value { get; }

    public Username(string value)
    {
        if (value.Length < 2 || value.Length > 16)
        {
            throw new DomainException("Username must be between 2 and 16 characters");
        }

        Value = value;
    }
}
Now we get to 1 March 2018 and the rule changes. We can keep the rule around:
public class Username
{
    public string Value { get; }

    public Username(string value, DateTime registrationDate)
    {
        if (registrationDate < new DateTime(2018, 3, 1) &&
            (value.Length < 2 || value.Length > 16))
        {
            throw new DomainException("Username must be between 2 and 16 characters");
        }

        if (registrationDate >= new DateTime(2018, 3, 1) &&
            (value.Length < 5 || value.Length > 16))
        {
            throw new DomainException("Username must be between 5 and 16 characters");
        }

        Value = value;
    }
}
That is the basic idea. In this way we keep our "old" rules around as well. This may become quite a hassle but I don't have enough experience to say. Changing our rules retroactively may introduce some pretty tricky situations, so I guess one would need to evaluate this on a case-by-case basis.
Just a thought.

Are serializers the right spot to remove shared state from Akka messages?

I am working on a distributed algorithm and decided to use Akka to scale it across machines. The machines need to exchange messages very frequently and these messages reference some immutable objects that exist on every machine. Hence, it seems sensible to "compress" the messages in the sense that the shared, replicated objects should not be serialized in the messages. Not only would this save on network bandwidth but it also would avoid creating duplicate objects on the receiver side whenever a message is deserialized.
Now, my question is how to do this properly. So far, I could think of two options:
Handle this on the "business layer", i.e., converting my original message objects to some reference objects that replace references to the shared, replicated objects with symbolic references. Then, I would send those reference objects rather than the original messages. Think of it as replacing some actual web resource with a URL. Doing this seems rather straightforward in terms of coding, but it also drags serialization concerns into the actual business logic.
Write custom serializers that are aware of the shared, replicated objects. In my case, it would be okay for this solution to introduce the replicated, shared objects as global state to the actor systems via the serializers. However, the Akka documentation does not describe how to programmatically add custom serializers, which would be necessary to weave the shared objects into the serializer. Also, I could imagine that there are a couple of reasons why such a solution would be discouraged. So, I am asking here.
Thanks a lot!
It's possible to write your own, custom serializers and let them do all sorts of weird things, then you can bind them at the config level as usual:
class MyOwnSerializer extends Serializer {

  // If you need logging here, introduce a constructor that takes an ExtendedActorSystem.
  // class MyOwnSerializer(actorSystem: ExtendedActorSystem) extends Serializer
  // Get a logger using:
  // private val logger = Logging(actorSystem, this)

  // This is whether "fromBinary" requires a "clazz" or not
  def includeManifest: Boolean = true

  // Pick a unique identifier for your Serializer,
  // you've got a couple of billions to choose from,
  // 0 - 40 is reserved by Akka itself
  def identifier = 1234567

  // "toBinary" serializes the given object to an Array of Bytes
  def toBinary(obj: AnyRef): Array[Byte] = {
    // Put the code that serializes the object here
    //#...
    Array[Byte]()
    //#...
  }

  // "fromBinary" deserializes the given array,
  // using the type hint (if any, see "includeManifest" above)
  def fromBinary(
    bytes: Array[Byte],
    clazz: Option[Class[_]]): AnyRef = {
    // Put your code that deserializes here
    //#...
    null
    //#...
  }
}
But this raises an important question: if your messages all reference data that is shared on the machines already, why would you want to put in the message the pointer to the object (very bad! messages should be immutable, and a pointer isn't!), rather than some sort of immutable, string objectId (kinda your option 1)? This is a much better option when it comes to preserving the immutability of the messages, and there is little change in your business logic (just put a wrapper over the shared state storage).
For more info, see the documentation.
I finally went with the solution proposed by Diego and want to share some more details on my reasoning and solution.
First of all, I am also in favor of option 1 (handling the "compaction" of messages in the business layer) for those reasons:
Serializers are global to the actor system. Making them stateful is actually a most severe violation of Akka's very philosophy as it goes against the encapsulation of behavior and state in actors.
Serializers have to be created upfront, anyway (even when adding them "programmatically").
Design-wise, one can argue that "message compaction" is not a responsibility of the serializer, either. In a strict sense, serialization is merely the transformation of runtime-specific data into a compact, exchangeable representation. Changing what to serialize is not a task of a serializer, though.
Having settled upon this, I still strove for a clear separation of "message compaction" and the actual business logic in the actors. I came up with a neat way to do this in Scala, which I want to share here. The basic idea is to make the message itself look like a normal case class but still allow these messages to "compactify" themselves. Here is an abstract example:
class Sender extends Actor {
  def sharedContext: SharedContext = ... // This is the shared data present on every node.

  // ...

  def someBusinessLogic(receiver: ActorRef): Unit = {
    val someData = computeData
    receiver ! MyMessage(someData)
  }
}

class Receiver extends Actor {
  implicit def sharedContext: SharedContext = ... // This is the shared data present on every node.

  def receive = {
    case MyMessage(someData) =>
      // ...
  }
}

object Receiver {
  object MyMessage {
    def apply(someData: SomeData): MyCompactMessage = MyCompactMessage(someData)

    def unapply(myCompactMessage: MyCompactMessage)(implicit context: SharedContext): Option[SomeData] =
      Some(myCompactMessage.someData(context))
  }
}
As you can see, the sender and receiver code feels just like using a case class and in fact, MyMessage could be a case class.
However, by implementing apply and unapply manually, one can insert one's own "compactification" logic and also implicitly inject the shared data necessary to do the "uncompactification", without touching the sender and receiver. For defining MyCompactMessage, I found Protocol Buffers to be especially suited, as it is already a dependency of Akka and efficient in terms of space and computation, but any other solution would do.

Design patterns: How can concrete implementations select behaviours at run time?

I'm new to the terminology, so please correct me if I've phrased any part of my question wrong.
Here's the example that I'm thinking of:
A file synchronization program that lets you pair 2 folders together and specify options such as mirroring the two folders, copying contents only one way, etc.
How would I specify at run time how each of these concrete implementations copies the files (e.g., with different types of encryption)?
Here is what I'd somewhat like to accomplish:
http://i.imgur.com/fkVN9.png
Do I have to make concrete implementations for each, i.e. MirrorAes, MirrorBlowfish, OnewayAes, etc? Is there a better alternative?
Thanks
The way that your diagram is showing it, the way you encrypt appears to be dependent on the way that you do synchronization. I doubt that this is the case (although I may be wrong).
If the way you sync is truly independent of the way you encrypt, switch from inheritance to composition. Make FolderPair an object that has a SyncStrategy and an EncryptionStrategy, like this:
class FolderPair {
    URI a, b;
    private final SyncStrategy syncStrategy;
    private final EncryptionStrategy cryptStrategy;

    public FolderPair(
            URI a
            , URI b
            , SyncStrategy syncStrategy
            , EncryptionStrategy cryptStrategy) {
        ...
    }

    public void sync() {
        syncStrategy.synchronize(a, b, cryptStrategy);
    }
}

interface SyncStrategy {
    void synchronize(URI a, URI b, EncryptionStrategy cryptStrategy);
}

interface EncryptionStrategy {
    byte[] encrypt(byte[] data);
}
Now you can configure your FolderPair objects with instances of SyncStrategy and EncryptionStrategy, mixing and matching them without creating a combinatorial explosion:
FolderPair p1 = new FolderPair(aUri, bUri, new OneWaySync(), new AesCrypt());
This design features two applications of the Strategy Pattern - one for the synchronization behavior, and the other one for the encryption.
You've got orthogonal concerns - the sync type and the encryption. One way to approach this is the Strategy Pattern, where your concrete implementations of the synchronization classes aggregate an encryption class, and the synchronizers interact with an encryption interface, allowing "mix and match" encryption and synchronization without having a multiplier effect on the number of classes you write.
You mean, you need an encryption strategy?
Use an abstract factory together with a set of strategies for encryption. In case you have multiple options, use a builder.
Let's say you have a SHA1Encryption and a DESEncryption. Both implement an interface, say, GeneralEncryptionStrategy, and you have an EncryptionFactory, which takes a string (either "sha1" or "des") as an argument and creates an instance of the corresponding class.
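A sketch of that arrangement, written in C# since the question doesn't commit to a language; the type names follow the answer above, everything else is illustrative:
public interface GeneralEncryptionStrategy
{
    byte[] Encrypt(byte[] data);
}

public class Sha1Encryption : GeneralEncryptionStrategy
{
    public byte[] Encrypt(byte[] data) { /* ... */ return data; }
}

public class DesEncryption : GeneralEncryptionStrategy
{
    public byte[] Encrypt(byte[] data) { /* ... */ return data; }
}

public static class EncryptionFactory
{
    // The string parameter is consulted once, here; clients only see the interface.
    public static GeneralEncryptionStrategy Create(string name)
    {
        switch (name)
        {
            case "sha1": return new Sha1Encryption();
            case "des": return new DesEncryption();
            default: throw new ArgumentException("Unknown encryption strategy: " + name);
        }
    }
}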

Reference Semantics in Google Protocol Buffers

I have slightly peculiar program which deals with cases very similar to this
(in C#-like pseudo code):
class CDataSet
{
    int m_nID;
    string m_sTag;
    float m_fValue;

    void PrintData()
    {
        //Blah Blah
    }
};

class CDataItem
{
    int m_nID;
    string m_sTag;
    CDataSet m_refData;
    CDataSet m_refParent;

    void Print()
    {
        if (null == m_refData)
        {
            m_refParent.PrintData();
        }
        else
        {
            m_refData.PrintData();
        }
    }
};
Members m_refData and m_refParent are initialized to null and used as follows:
m_refData -> Used when a new data set is added
m_refParent -> Used to point to an existing data set.
A new data set is added only if the field m_nID doesn't match an existing one.
Currently this code is managing around 500 objects with around 21 fields per object and the format of choice as of now is XML, which at 100k+ lines and 5MB+ is very unwieldy.
I am planning to modify the whole shebang to use ProtoBuf, but currently I'm not sure as to how I can handle the reference semantics. Any thoughts would be much appreciated
Out of the box, protocol buffers does not have any reference semantics. You would need to cross-reference them manually, typically using an artificial key. Essentially, on the DTO layer you would add a key to CDataSet (that you simply invent, perhaps just an increasing integer), store the key instead of the item in m_refData/m_refParent, and run the fixup manually during serialization/deserialization. You can also just store the index into the set of CDataSet, but that may make insertion etc. more difficult. Up to you; since this is serialization, you could argue that you won't insert (etc.) outside of initial population and hence the raw index is fine and reliable.
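A rough sketch of what that DTO layer could look like with protobuf-net; the DTO shapes and key names are illustrative:
using ProtoBuf;

[ProtoContract]
class CDataSetDto
{
    [ProtoMember(1)] public int Key;       // artificial key, e.g. an increasing integer
    [ProtoMember(2)] public int Id;
    [ProtoMember(3)] public string Tag;
    [ProtoMember(4)] public float Value;
}

[ProtoContract]
class CDataItemDto
{
    [ProtoMember(1)] public int Id;
    [ProtoMember(2)] public string Tag;
    [ProtoMember(3)] public int? DataKey;    // key of the referenced data set, or null
    [ProtoMember(4)] public int? ParentKey;  // key of the parent data set, or null
}

// After deserialization, run a fix-up pass: build a dictionary from Key to the
// materialized CDataSet and translate DataKey/ParentKey back into object references.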
This is, however, a very common scenario - so as an implementation-specific feature I've added optional (opt-in) reference tracking to my implementation (protobuf-net), which essentially automates the above under the covers (so you don't need to change your objects or expose the key outside of the binary stream).
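In protobuf-net 2.x that opt-in looked roughly like the following; check the documentation of the version you are using, since later versions changed how reference tracking is configured:
using ProtoBuf;

[ProtoContract]
class CDataSet
{
    [ProtoMember(1)] public int Id;
    [ProtoMember(2)] public string Tag;
    [ProtoMember(3)] public float Value;
}

[ProtoContract]
class CDataItem
{
    [ProtoMember(1)] public int Id;
    [ProtoMember(2)] public string Tag;
    [ProtoMember(3, AsReference = true)] public CDataSet Data;    // serialized once, referenced elsewhere
    [ProtoMember(4, AsReference = true)] public CDataSet Parent;  // restored as a reference on deserialize
}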

God object - decrease coupling to a 'master' object

I have an object called Parameters that gets tossed from method to method down and up the call tree, across package boundaries. It has about fifty state variables. Each method might use one or two variables to control its output.
I think this is a bad idea, because I can't easily see what a method needs to function, or even what might happen with a certain combination of parameters for module Y which is totally unrelated to my current module.
What are some good techniques for decreasing coupling to this god object, or ideally eliminating it?
public void ExporterExcelParFonds(ParametresExecution parametres)
{
    ApplicationExcel appExcel = null;
    LogTool.Instance.ExceptionSoulevee = false;

    bool inclureReferences = parametres.inclureReferences;
    bool inclureBornes = parametres.inclureBornes;
    DateTime dateDebut = parametres.date;
    DateTime dateFin = parametres.dateFin;

    try
    {
        LogTool.Instance.AfficherMessage(Variables.msg_GenerationRapportPortefeuilleReference);

        bool fichiersPreparesAvecSucces = PreparerFichiers(parametres, Sections.exportExcelParFonds);
        if (!fichiersPreparesAvecSucces)
        {
            parametres.afficherRapportApresGeneration = false;
            LogTool.Instance.ExceptionSoulevee = true;
        }
        else
        {
The caller would do:
PortefeuillesReference pr = new PortefeuillesReference();
pr.ExporterExcelParFonds(parametres);
First, at the risk of stating the obvious: pass the parameters which are used by the methods, rather than the god object.
This, however, might lead to some methods needing huge amounts of parameters because they call other methods, which call other methods in turn, etcetera. That was probably the inspiration for putting everything in a god object. I'll give a simplified example of such a method with too many parameters; you'll have to imagine that "too many" == 3 here :-)
public void PrintFilteredReport(
    Data data, FilterCriteria criteria, ReportFormat format)
{
    var filteredData = Filter(data, criteria);
    PrintReport(filteredData, format);
}
So the question is, how can we reduce the amount of parameters without resorting to a god object? The answer is to get rid of procedural programming and make good use of object oriented design. Objects can use each other without needing to know the parameters that were used to initialize their collaborators:
// dataFilter service object only needs to know the criteria
var dataFilter = new DataFilter(criteria);
// report printer service object only needs to know the format
var reportPrinter = new ReportPrinter(format);
// filteredReportPrinter service object is initialized with a
// dataFilter and a reportPrinter service, but it doesn't need
// to know which parameters those are using to do their job
var filteredReportPrinter = new FilteredReportPrinter(dataFilter, reportPrinter);
Now the FilteredReportPrinter.Print method can be implemented with only one parameter:
public void Print(Data data)
{
    var filteredData = this.dataFilter.Filter(data);
    this.reportPrinter.Print(filteredData);
}
Incidentally, this sort of separation of concerns and dependency injection is good for more than just eliminating parameters. If you access collaborator objects through interfaces, then that makes your class
very flexible: you can set up FilteredReportPrinter with any filter/printer implementation you can imagine
very testable: you can pass in mock collaborators with canned responses and verify that they were used correctly in a unit test
If all your methods are using the same Parameters class, then maybe it should be a member variable of a class with the relevant methods in it; then you can pass Parameters into the constructor of this class, assign it to a member variable, and all your methods can use it without having to pass it as a parameter.
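A small sketch of that suggestion, reusing the names from the question:
public class PortefeuillesReference
{
    private readonly ParametresExecution parametres;

    public PortefeuillesReference(ParametresExecution parametres)
    {
        this.parametres = parametres;
    }

    public void ExporterExcelParFonds()
    {
        // Methods read this.parametres directly instead of receiving it as an argument.
        bool inclureReferences = this.parametres.inclureReferences;
        // ...
    }
}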
A good way to start refactoring this god class is by splitting it up into smaller pieces. Find groups of properties that are related and break them out into their own class.
You can then revisit the methods that depend on Parameters and see if you can replace it with one of the smaller classes you created.
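For example, a hypothetical extraction of just the export-related parameters might look like this:
// Illustrative only: group related parameters into a cohesive class.
public class OptionsExportExcel
{
    public bool InclureReferences { get; set; }
    public bool InclureBornes { get; set; }
    public DateTime DateDebut { get; set; }
    public DateTime DateFin { get; set; }
}

// The export method now declares exactly what it depends on.
public void ExporterExcelParFonds(OptionsExportExcel options)
{
    // ...
}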
Hard to give a good solution without some code samples and real world situations.
It sounds like you are not applying object-oriented (OO) principles in your design. Since you mention the word "object" I presume you are working within some sort of OO paradigm. I recommend you convert your "call tree" into objects that model the problem you are solving. A "god object" is definitely something to avoid. I think you may be missing something fundamental... If you post some code examples I may be able to answer in more detail.
Query each client for their required parameters and inject them?
Example: each "object" that requires "parameters" is a "Client". Each "Client" exposes an interface through which a "Configuration Agent" queries the Client for its required parameters. The Configuration Agent then "injects" the parameters (and only those required by a Client).
For the parameters that dictate behavior, one can instantiate an object that exhibits the configured behavior. Then client classes simply use the instantiated object - neither the client nor the service has to know what the value of the parameter is. For instance, for a parameter that tells where to read data from, have the FlatFileReader, XMLFileReader and DatabaseReader all inherit the same base class (or implement the same interface). Instantiate one of them based on the value of the parameter; then clients of the reader class just ask the instantiated reader object for data, without knowing whether it comes from a file or from the DB, as sketched below.
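A hedged sketch of that idea; all type names are illustrative and the record type is simplified to a string:
using System;
using System.Collections.Generic;

public interface IRecordReader
{
    IEnumerable<string> ReadAll();
}

public class FlatFileReader : IRecordReader
{
    public IEnumerable<string> ReadAll() { /* read from a flat file */ yield break; }
}

public class XmlFileReader : IRecordReader
{
    public IEnumerable<string> ReadAll() { /* read from an XML file */ yield break; }
}

public class DatabaseReader : IRecordReader
{
    public IEnumerable<string> ReadAll() { /* read from the database */ yield break; }
}

public static class ReaderFactory
{
    // The parameter value is consulted exactly once, here; clients only see IRecordReader.
    public static IRecordReader Create(string source)
    {
        switch (source)
        {
            case "flatfile": return new FlatFileReader();
            case "xml":      return new XmlFileReader();
            case "database": return new DatabaseReader();
            default: throw new ArgumentException("Unknown data source: " + source);
        }
    }
}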
To start you can break your big ParametresExecution class into several classes, one per package, which only hold the parameters for the package.
Another direction could be to pass the ParametresExecution object at construction time. You won't have to pass it around at every function call.
(I am assuming this is within a Java or .NET environment) Convert the class into a singleton. Add a static method called "getInstance()" or something similar to call to get the name-value bundle (and stop "tramping" it around -- see Ch. 10 of "Code Complete" book).
Now the hard part. Presumably, this is within a web app or some other non batch/single-thread environment. So, to get access to the right instance when the object is not really a true singleton, you have to hide selection logic inside of the static accessor.
In java, you can set up a "thread local" reference, and initialize it when each request or sub-task starts. Then, code the accessor in terms of that thread-local. I don't know if something analogous exists in .NET, but you can always fake it with a Dictionary (Hash, Map) which uses the current thread instance as the key.
It's a start... (there's always decomposition of the blob itself, but I built a framework that has a very similar semi-global value-store within it)