I'm trying to build a model of the servers and applications at my workplace. A server can host many applications. An application can be hosted across many servers.
Normally I would just have the Host class contain a List<Application>, and the Application class a List<Host>. However, there are a few fields that are specific to the particular host-application relationship. For example, UsedMb represents the amount of disk space used by an application on a host.
I could, of course, have a HostedApplication class representing an intermediate object which would hold the UsedMb field. Both Host and Application classes would then contain a List<HostedApplication>.
The problem is, however, that an application also needs to know about some aspects of its Host that would be included in the Host class (for example, the hosts are geographically distributed; an application needs to know how many data centres it is hosted in, so it needs to be able to check the DC names of all its hosts).
So instead I could have the HostedApplication class hold references to both the Host object and the Application object it refers to. But then in some cases I will need to loop through all applications (and in other cases, all hosts). Therefore I would need three separate lists, a List<Host>, a List<Application>, and a List<HostedApplication>, to be able to loop through all three as needed.
My basic question is, what is the standard way of dealing with this sort of configuration? All options have advantages and disadvantages. The last option I mentioned seems most correct, but is having three lists overkill? Is there a more elegant solution?
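To make that last option concrete, here is a minimal sketch of what I have in mind (C#, names invented):

using System.Collections.Generic;
using System.Linq;

class Host
{
    public string DcName { get; set; }
    public List<HostedApplication> HostedApps { get; } = new List<HostedApplication>();
}

class Application
{
    public List<HostedApplication> Hostings { get; } = new List<HostedApplication>();

    // An application can reach through the association to ask about its hosts,
    // e.g. how many distinct data centres it is hosted in.
    public int DataCentreCount() =>
        Hostings.Select(h => h.Host.DcName).Distinct().Count();
}

// The association object: one instance per host-application pair,
// carrying the fields specific to that relationship.
class HostedApplication
{
    public Host Host { get; set; }
    public Application Application { get; set; }
    public int UsedMb { get; set; }
}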
Ideally I would be able to talk to you about the problem, but here is a potential solution based on my rough understanding of the requirements (C++ style, with a lot of implementation left out):
#include <cstddef>
#include <string>
#include <vector>

class Host {
public:
    std::string GeographicLocation() const;
    std::string DCName() const;
};

class HostAsAppearsToClient : public Host {
public:
    HostAsAppearsToClient(const Host&);
    // Allows Host -> HostAsAppears... conversion
    std::size_t UsedMB() const;
    void UseMoreMB(std::size_t mb);
};

class Client {
    std::vector<HostAsAppearsToClient> hosts;

public:
    void AddHost(const Host& host) {
        // The vector reallocates as needed
        hosts.push_back(HostAsAppearsToClient(host));
        hosts.back().UseMoreMB(56);
    }

    void DoSomething(std::size_t index) {
        // Gets the MB that that host is using, and all other normal host
        // data if we decide we need it ...
        hosts[index].UsedMB();
        print(hosts[index].DCName());
    }
};

int main() {
    std::vector<Host> hosts(40);
    std::vector<Client> clients(30);
    // hosts[2].UsedMB(); // Doesn't compile: a plain Host has no usage data
}
I fully expect that this does not meet your requirements, but please let me know in what way so that I can better understand your problem.
EDIT:
VBA .... unlucky :-P
It is possible to load DLLs in VBA, which would allow you to write and compile your code in any other language and just forward the inputs and outputs through VBA from the UI to the DLL, but I guess it's up to you whether that's worth it. Documentation on how to use a DLL in VBA Excel: link
Good Luck!
I was reading blogs about "What about the code? Why does the code need to change when it has to run on multiple machines?", and I came across one line which I am not getting. Can anyone please help me to understand it with a simple example?
"There should be no static instances in the class. Static instances hold application data and when a particular server goes down, all the static data/state is lost. The app is left in an inconsistent state."
Assumption: a static instance is an instance which can exist at most once per process or context; e.g. in Java there is at most one copy of a static class, with all the data (or state) that the class contains.
So the memory model for a static class in a single node/JVM/process is very simple. Since there is a single copy of the data, it is quite straightforward to reason about it. For example, one may update the data and every subsequent reader will see the updated information. This is a bit more complicated for multithreaded programs, but still straightforward compared to distributed systems.
Clearly, in a distributed system every node may have its own static class with state, which means that if a system contains several nodes, there are several copies of the data.
Having several copies is a problem. It is hard to reason about such a system: every node may hold some unique data, and the data may differ from node to node. How is it synced? What about availability vs consistency?
For example, take a simple counter. In a single-node system, a static instance may keep the score. If one writer increases the counter, the next reader will see the increased value (assuming the multithreaded part is implemented correctly, which is not that complicated). The same counter in a distributed system is much more complicated: a writer may write to one node, but a reader may read from another.
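As a rough illustration (C#, names invented), the counter that is safe on one node but wrong across several might look like this:

using System.Threading;

public static class PageViewCounter
{
    private static long _count;

    // Interlocked makes this correct across threads within one process...
    public static long Increment() => Interlocked.Increment(ref _count);
    public static long Current => Interlocked.Read(ref _count);
}

// ...but every server in a cluster runs its own process and therefore its
// own _count. Node A may report 100 while node B reports 37, and if either
// node dies, its share of the count is lost with it.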
Basically, having state on nodes is a hard problem to solve. This is the primary reason to use some distributed storage layer, e.g. HBase, Cassandra, or AWS DynamoDB. All these storage systems have predictable behaviour, which helps to reason about the correctness of programs.
For example, suppose there are just two servers which accept payments from clients.
Then somebody decided to create a static class to be friendly with multithreading:
public static class Payment
{
public static decimal Amount;
public static bool IsMoneyReceived;
public static string UserName;
}
Then some client, let's call him John, decided to buy something in the shop. John sent money, and the static class holds the data about this purchase. Some service is about to write the data from the Payment class into the database; however, the electricity is cut off. The load balancer sees that the server is not responding and redirects John's requests to another server, which knows nothing about the data in the Payment class.
I've been searching for a while for how I should test my data access layer, without much success. Let me list my concerns:
Unit tests
This guy (here: Using InMemoryConnection to test ElasticSearch) says that:
Although asserting the serialized form of a request matches your
expectations in the SUT may be sufficient.
Is it really worth asserting the serialized form of requests? Do these kinds of tests have any value? It seems unlikely that someone would change a function in a way that should not change the serialized request.
If it is worth it, what is the correct way to assert these requests?
Unit tests once again
Another guy (here: ElasticSearch 2.0 Nest Unit Testing with MOQ) shows a good looking example:
void Main()
{
var people = new List<Person>
{
new Person { Id = 1 },
new Person { Id = 2 },
};
var mockSearchResponse = new Mock<ISearchResponse<Person>>();
mockSearchResponse.Setup(x => x.Documents).Returns(people);
var mockElasticClient = new Mock<IElasticClient>();
mockElasticClient.Setup(x => x
.Search(It.IsAny<Func<SearchDescriptor<Person>, ISearchRequest>>()))
.Returns(mockSearchResponse.Object);
var result = mockElasticClient.Object.Search<Person>(s => s);
Assert.AreEqual(2, result.Documents.Count());
}
public class Person
{
public int Id { get; set; }
}
Probably I'm missing something, but I can't see the point of this code snippet. First he mocks an ISearchResponse to always return the people list. Then he mocks an IElasticClient to return this previous search response for any search request he makes.
Well, it doesn't really surprise me that the assertion is true after that. What is the point of these kinds of tests?
Integration tests
Integration tests make more sense to me for testing a data access layer. So after a little searching I found this package (https://www.nuget.org/packages/elasticsearch-inside/). If I'm not mistaken, it is just about an embedded JVM and an Elasticsearch instance. Is it good practice to use it? Shouldn't I use my already running instance instead?
If anyone has good experience with testing that I didn't include I would happily hear those as well.
Each of the approaches that you have listed may be a reasonable one to take, depending on exactly what it is you are trying to achieve with your tests. You haven't specified this in your question :)
Let's go over the options that you have listed.
Asserting the serialized form of the request to Elasticsearch may be a sufficient approach if you build a request to Elasticsearch based on a varying number of inputs. You may have tests that provide different input instances and assert the form of the query that will be sent to Elasticsearch for each. These kinds of tests are going to be fast to execute but make the assumption that the query that is generated and you are asserting the form of is going to return the results that you expect.
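For example (a sketch only; assumes a NEST 6.x-style API and the Person type from your snippets), the serialized form can be captured without touching a real cluster by combining InMemoryConnection with DisableDirectStreaming:

using System;
using System.Text;
using Elasticsearch.Net;
using Nest;
using NUnit.Framework;

public class SerializationTests
{
    [Test]
    public void TermQuery_SerializesAsExpected()
    {
        // InMemoryConnection never hits a real node; DisableDirectStreaming
        // keeps the request bytes around so we can inspect them.
        var pool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
        var settings = new ConnectionSettings(pool, new InMemoryConnection())
            .DefaultIndex("people")
            .DisableDirectStreaming();
        var client = new ElasticClient(settings);

        var response = client.Search<Person>(s => s
            .Query(q => q.Term(t => t.Field(p => p.Id).Value(1))));

        var json = Encoding.UTF8.GetString(response.ApiCall.RequestBodyInBytes);
        StringAssert.Contains("\"term\"", json);
    }
}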
This is another form of unit test that stubs out the interaction with the Elasticsearch client. The system under test (SUT) in this example is not the client but another component that internally uses the client, so the interaction with the client is controlled through the stub object to return an expected response. The example is contrived in that in a real test, you wouldn't assert on the results of the client call as you point out but rather on the output of the SUT.
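In other words (a contrived sketch; PersonFinder is an invented SUT standing in for your own component), the mock only earns its keep when something of yours sits between the test and the client:

using System;
using System.Collections.Generic;
using System.Linq;
using Moq;
using Nest;
using NUnit.Framework;

// A hypothetical SUT: our own logic layered on top of IElasticClient.
public class PersonFinder
{
    private readonly IElasticClient _client;
    public PersonFinder(IElasticClient client) { _client = client; }

    public IEnumerable<int> FindAllIds() =>
        _client.Search<Person>(s => s.MatchAll()).Documents.Select(p => p.Id);
}

public class PersonFinderTests
{
    [Test]
    public void FindAllIds_ReturnsIdOfEveryDocument()
    {
        var people = new List<Person> { new Person { Id = 1 }, new Person { Id = 2 } };

        var searchResponse = new Mock<ISearchResponse<Person>>();
        searchResponse.Setup(x => x.Documents).Returns(people);

        var client = new Mock<IElasticClient>();
        client.Setup(x => x.Search(It.IsAny<Func<SearchDescriptor<Person>, ISearchRequest>>()))
              .Returns(searchResponse.Object);

        // The assertion targets the SUT's behaviour, not the stubbed client call.
        CollectionAssert.AreEquivalent(new[] { 1, 2 },
            new PersonFinder(client.Object).FindAllIds().ToList());
    }
}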
Integration/Behavioural tests against a known data set within an Elasticsearch cluster may provide the most value and go beyond points 1 and 2 as they will not only incidentally test the generated queries sent to Elasticsearch for a given input, but will also be testing the interaction and producing an expected result. No doubt however that these types of test are harder to setup than 1 and 2, but the investment in setup may be outweighed by their benefit for your project.
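A sketch of this third option, assuming an already-running local node and a NEST 6.x-style API (the index name is invented and assumed safe to write to):

using System;
using System.Linq;
using Elasticsearch.Net;
using Nest;
using NUnit.Framework;

public class PersonSearchIntegrationTests
{
    [Test]
    public void Search_FindsSeededPerson()
    {
        var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
            .DefaultIndex("people-tests"); // throwaway test index
        var client = new ElasticClient(settings);

        // Seed known data; Refresh.WaitFor makes it visible to the search below.
        client.Index(new Person { Id = 42 }, i => i.Refresh(Refresh.WaitFor));

        var result = client.Search<Person>(s => s
            .Query(q => q.Term(t => t.Field(p => p.Id).Value(42))));

        Assert.AreEqual(42, result.Documents.Single().Id);
    }
}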
So, you need to ask yourself what kinds of tests are sufficient to achieve the level of assurance that you require to assert that your system is doing what you expect it to do; it may be a combination of all three different approaches for different elements of the system.
You may want to check out how the .NET client itself is tested; there are components within the Tests project that spin up an Elasticsearch cluster with different plugins installed, seed it with known generated data and make assertions on the results. The source is open and licensed under Apache 2.0 license, so feel free to use elements within your project :)
Is there a design pattern that would prevent a class's method from running until one or more requirements are met?
An example could be a Car and getting it going: to start the car it needs petrol and the keys in the ignition, and then the key must be turned.
How would one solve the problem of dependent requirements (and the necessary ordering)? The ignition won't start without a key in, and a key won't turn if it's not inserted.
Here are two methods I know, which both have pitfalls:
void startCar()
    if checkPetrol()
        if checkKeyIn()
            if checkKeyTurn()
                startEngine()
Also using a switch statement is possible but then it requires a lot of checks as well.
How can one solve this?
Maybe there are a lot of other more suitable solutions, but the Observer pattern can be used here. In fact, it will allow you to define a dependency between objects so that when one object changes state, all its dependents are notified and updated automatically. I'm thinking of something like combining it with the Facade design pattern (only startCar() is public), with proper exception handling:
void startCar() {
    if (checkPetrol()) {
        if (checkKeyIn()) {
            if (checkKeyTurn()) {
                startEngine();
            } else {
                throw new CarCustomException("You need to turn the key in order to start.");
            }
        } else {
            throw new CarCustomException("Car can't start without a key.");
        }
    } else {
        throw new CarCustomException("Not enough fuel.");
    }
}
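The same ordering can also be expressed with guard clauses, which keeps the facade flat while still enforcing the dependent checks in sequence; a sketch of the same method:

void startCar() {
    // Each guard throws as soon as its precondition fails; the order encodes
    // the dependency chain: petrol, then key inserted, then key turned.
    if (!checkPetrol()) throw new CarCustomException("Not enough fuel.");
    if (!checkKeyIn()) throw new CarCustomException("Car can't start without a key.");
    if (!checkKeyTurn()) throw new CarCustomException("You need to turn the key in order to start.");
    startEngine();
}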
I'd like to write a Minecraft mod which adds a new type of mob. Is that possible? I see that, in Bukkit, EntityType is a predefined enum, which leads me to believe there may not be a way to add a new type of entity. I'm hoping that's wrong.
Yes, you can!
I'd direct you to some tutorials on the Bukkit forums. Specifically:
Creating a Meteor Entity
Modifying the Behavior of a Mob or Entity
Disclaimer: the first is written by me.
You cannot truly add an entirely new mob just via Bukkit; you'd have to use Spout to give it a different skin. However, if you simply want a mob and are content with sharing the skin of another entity, it can be done.
The idea is injecting the EntityType values via Java's Reflection API. It would look something like this:
public static void load() {
    try {
        // Get a handle to the private static registration method
        // EntityTypes.a(Class, String, int) and make it accessible
        Method a = EntityTypes.class.getDeclaredMethod("a", Class.class, String.class, int.class);
        a.setAccessible(true);
        // Static method, so the receiver argument is null
        a.invoke(null, YourEntityClass.class, "Your identifier, can be anything", id_map);
    } catch (Exception e) {
        // Insert handling code here
    }
}
I think the above is fairly straightforward. We get a handle to the private method, make it accessible, and invoke the registration method. id_map contains the entity id to map your entity to; 12 is that of a fireball. The mapping can be found in EntityType.class. Note that these ids should not be confused with their packet designations; the two are completely different.
Lastly, you actually need to spawn your entity. MC will continue spawning the default entity, since we haven't removed it from the map. But it's just a matter of calling net.minecraft.server.spawnEntity(your_entity, SpawnReason.CUSTOM).
If you need a skin, I suggest you look into SpoutPlugin. It does require running the Spout client to join to such a server, but the possibilities at that point are literally infinite.
It would only be possible with client-side mods as well, sadly. You could look into Spout (http://www.spout.org/), a client mod which provides an API for server-side plugins to do more on the client, but without doing something client-side, this is impossible.
It's not possible to add new entities, but it is possible to edit entity behaviors. For example, one time I made it so that you could tame iron golems and they followed you around.
Also, you can sort of achieve custom-looking human entities by accessing player entities and tweaking network packets.
It's expensive, as you need to create a player account to achieve this, which then gets used to act as a mob. You then spawn a named entity and give it the same behaviour AI as you would an existing mob. Keep in mind, however, that you will need to write the AI yourself (you could borrow code straight from CraftBukkit/Bukkit) and you will need to push the movement and events of this mob to players within sight. Technically speaking, all you're doing is pushing packets to the client from the server about what's actually happening; if you're outside that push list, nothing will happen, and other players will see you being knocked around by an invisible something :) it's a bit of a mental leap :)
I'm using this concept to create NPCs that act as friendly and factional armies. I've also used mobs themselves as friendly entities (if you belong to a dark faction).
I'd personally like to see a future server API that can push model instructions to the client for a server-specific cache, as well as the ability to tell a client where to download mob skins.
It's doable today, but I'd have to create a plugin for the client to achieve this, which is back to a game of annoyance, especially when Mojang pushes out a new release and all the plugins take forever to rise with its tide.
In all honesty, this entire ecosystem could be managed more strategically, but right now I think it's just really ad hoc product management (speaking as a former product manager of .NET, I'd love to work on this strategy; it would be such a fun gig).
I am writing an application where there are a bunch of handlers.
I am trying to decide whether I should package these handlers within the same Apache module or have a separate module for each handler.
I agree this is a generic question and would depend on my app, but I would like to know the general considerations I have to make, and the trade-offs of each approach.
It would be really good if somebody could tell me the advantages/disadvantages of both approaches.
You have not specified whether all these handlers need to perform interrelated tasks or whether they will work independently of each other.
I would keep the related handlers in the same module, and the rest in their own modules. I believe this makes the configuration of the server easy (we can easily load/unload a module as required) and keeps the code base well managed too.
For instance, suppose we needed two handlers that share some per-request data; then we could keep them in the same module:
static int my_early_hook(request_rec *r) {
    req_cfg *mycfg = apr_palloc(r->pool, sizeof(req_cfg));
    ap_set_module_config(r->request_config, &my_module, mycfg);
    /* set module data */
}

static int my_later_hook(request_rec *r) {
    req_cfg *mycfg = ap_get_module_config(r->request_config, &my_module);
    /* access data */
}