I'm designing a web-based game. In this game almost all actions will take a certain amount of time, but I'm not sure where to store and execute the actions.
For example, a character wants to go from A to B, and let's say this will take 30 seconds. In my character table there is a column called Location, which stores the Id of the current place. So I must change this Id after 30 seconds.
The best solution I have come up with so far is creating SQL jobs. Since I don't have an environment to test how 100,000 SQL jobs will affect server performance, I wanted to ask: are there any other ways, or should I stick to SQL jobs?
PS: The logic is mostly the same as in other web-based games; any direct example of how other games handle such things will be appreciated.
Using a SQL database will cause you a lot of pain later on, because it is not ideal for what you are attempting: https://gamedev.stackexchange.com/questions/40215/use-a-sql-database-for-a-desktop-game
Only use SQL if you want to store vast amounts of login details; other than that, use something similar to Couchbase, a NoSQL database:
http://www.couchbase.com/why-nosql/nosql-database
Just my 2 cents, hope I helped.
You don't need any job for this.
If we stay with the example above, every place where our character can be has additional information (in an extra table connecting places and characters), such as when the record's validity starts:
Player A is at Brighton from 2014-05-01T00:00:00 to NULL,
but he is moving to London, which takes 30 secs, so
Player A is at London from 2014-06-09T10:30:30 to NULL, and the
previous place record will be closed (its to value set) with the
current from date (2014-06-09T10:30:30).
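To make that concrete, here's a minimal SQL sketch of the idea (table, column, and parameter names are my own assumptions): the current location is derived from timestamps at read time, so nothing has to execute when the 30 seconds elapse.

    -- Hypothetical schema: one row per stay, closed by setting ValidTo.
    CREATE TABLE CharacterLocation (
        CharacterId INT      NOT NULL,
        PlaceId     INT      NOT NULL,
        ValidFrom   DATETIME NOT NULL,
        ValidTo     DATETIME NULL      -- NULL = open-ended (current) record
    );

    -- Start a move that takes 30 seconds: close the current record at the
    -- arrival time and open the destination record from that same time.
    DECLARE @Arrival DATETIME = DATEADD(SECOND, 30, GETUTCDATE());

    UPDATE CharacterLocation
       SET ValidTo = @Arrival
     WHERE CharacterId = @CharacterId AND ValidTo IS NULL;

    INSERT INTO CharacterLocation (CharacterId, PlaceId, ValidFrom, ValidTo)
    VALUES (@CharacterId, @DestinationId, @Arrival, NULL);

    -- Resolve the character's location lazily, on each request:
    SELECT PlaceId
      FROM CharacterLocation
     WHERE CharacterId = @CharacterId
       AND ValidFrom <= GETUTCDATE()
       AND (ValidTo IS NULL OR ValidTo > GETUTCDATE());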
I implemented a simple scheduling mechanism using only ASP.NET. You can find a proof of concept at http://weblogs.asp.net/ricardoperes/using-the-asp-net-cache-as-a-scheduler.
I have a new idea and a question about it that I would like to ask you.
We have an on-premise / in-house CRM application that we use nearly 24x7. We also do billing and payroll on the same CRM database, which is OLTP, and the same goes for SSRS reports.
It seems that whenever we perform a front-end operation that inserts and updates a couple of entities at the same time, our application freezes until that process finishes, e.g. extracting payroll for 500 employees for their activities during the last 2 weeks. Basically it summarizes total working hours, pulls those numbers from the database, and writes/updates the record to say the extract has been accomplished. So for 500 employees we are looking at around 40K-50K rows of Insert/Select/Update statements altogether.
Nobody can do anything while this process runs! We are considering the following options to take care of this issue:
Running this process in off-hours,
OR making a copy of the Dynamics CRM database and doing these operations (extracting thousands of records and running multiple reports) on the copy.
My questions are:
How do we create the copy in the first place, and where should it live (best practices)?
How do we keep it synchronized in real time?
If we only run SELECT statements against the copy, that's fine, but if we do any insert/update on the copy, how do we reflect that in the actual live DB? In short, how do we make sure the original and the copy stay synchronized with each other in real time?
I know I asked a lot of questions, but as a SQL person stepping into the CRM team and offering suggestions, you know what I am trying to say.
Thanks, folks, for any suggestions in advance.
Well, to answer your question regarding a live "copy" of a database, a good solution is an AlwaysOn availability group:
https://blogs.technet.microsoft.com/canitpro/2013/08/19/step-by-step-creating-a-sql-server-2012-alwayson-availability-group/
Though I don't think that is what you are going to want in this situation. AlwaysOn availability groups are typically for database instances that require very short failover windows. For example: if the primary DB server in the cluster goes down, it fails over to a secondary within a second or two at most, and the end users only notice a slight hiccup.
What I think you would find better is to look at the insert statements that are hitting your database server and see why they are preventing you from pulling data. If they are truly locking the tables, changing a large number of your reads to "nolock" reads might help remedy your situation.
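For illustration, a hedged sketch of what that could look like (table and database names are invented). Keep in mind NOLOCK reads can return uncommitted ("dirty") data, so they suit reports better than billing:

    -- Report query that takes no shared locks (may read dirty data):
    SELECT EmployeeId, SUM(WorkedHours) AS TotalHours
      FROM dbo.TimeEntry WITH (NOLOCK)
     GROUP BY EmployeeId;

    -- Often a safer alternative: readers see row versions instead of
    -- blocking on writers, database-wide.
    ALTER DATABASE CrmDb SET READ_COMMITTED_SNAPSHOT ON;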
It would also be helpful to know what kind of resources you have allocated, and whether you have proper indexing on the core tables of your DB. Without proper indexing, many of the queries can take longer than normal, causing the locking you're seeing.
Finally, I would recommend table partitioning if the tables you are pulling against are too large. This can potentially help with a lot of disk speed issues and also optimize your queries if you partition by time segment (i.e. make a new partition every X months, so a query pulling from one time segment only touches that one data file):
https://msdn.microsoft.com/en-us/library/ms190787.aspx
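A rough sketch of that time-segment idea in T-SQL (boundary dates, filegroups, and table names are invented for illustration):

    -- Partition function: one partition per quarter in this example.
    CREATE PARTITION FUNCTION pfByQuarter (DATETIME)
    AS RANGE RIGHT FOR VALUES ('2016-01-01', '2016-04-01', '2016-07-01');

    -- Map every partition to a filegroup (all to PRIMARY here for brevity;
    -- in practice you would spread them across filegroups/disks).
    CREATE PARTITION SCHEME psByQuarter
    AS PARTITION pfByQuarter ALL TO ([PRIMARY]);

    -- A hypothetical activity table partitioned on its date column, so a
    -- query filtered to one quarter touches only that partition's rows.
    CREATE TABLE dbo.PayrollActivity (
        ActivityId   BIGINT       NOT NULL,
        EmployeeId   INT          NOT NULL,
        ActivityDate DATETIME     NOT NULL,
        WorkedHours  DECIMAL(5,2) NOT NULL
    ) ON psByQuarter (ActivityDate);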
I would say you need to focus on efficiency rather than on a "copy database", as your volumes don't sound high enough to need anything like that. I currently have a SQL Server transactional database taking 10 million+ inserts a day, and I still run live reports against it. You just need the resources and proper indexing to accommodate it.
We have a process in which several site servers send data to a central server (through a linked server). A new site has seen the job duration more than double in three weeks, and a couple of the other sites often fail due to run-time overlap.
It is a two step process:
Insert new records
Update changed records
The insert only takes a few seconds, but the update takes anywhere from 5 to 20 minutes, depending on the site. I am able to change the query that drives the update and get it down to only a couple of seconds, but when put into an UPDATE statement it still takes several minutes.
I am leaning towards consolidating everything into a single job on the central server, making it a pull operation which, based on the testing I have done, should be much faster. However, my question is: what is considered "best practice" in this situation? I am going to have to change quite a bit to get this working properly, so I might as well do it right.
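Roughly, the two shapes look like this (server, database, and column names anonymized for the sketch):

    -- Push (runs on each site server): the UPDATE against the linked
    -- server is often executed row by row remotely, which is where the
    -- minutes go.
    UPDATE c
       SET c.SomeValue = s.SomeValue
      FROM [Central].[CentralDb].dbo.Records AS c
      JOIN dbo.Records AS s ON s.Id = c.Id
     WHERE s.ModifiedDate > @LastSync;

    -- Pull (runs once on the central server): the UPDATE is local and
    -- only the site's changed rows cross the wire.
    UPDATE c
       SET c.SomeValue = s.SomeValue
      FROM dbo.Records AS c
      JOIN [Site01].[SiteDb].dbo.Records AS s ON s.Id = c.Id
     WHERE s.ModifiedDate > @LastSync;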
I am developing a VB.Net application that might run on a LAN, with MS Access as the back end. I have developed many single-user applications, but I don't know about multi-user setups, LANs, DB management, etc. How do I make the program multi-user on a LAN when data will be accessed at the same time, and how do I manage such things?
Please give me some help and guidance.
Thanks
Your VB application does not care how many people run it.
Your database, with MS Access, has some serious issues with multiple users. Get away from it if you can. SQL Server has a free version called SQL Express. If you only plan on 2 people, you might be OK with Access for a while but be prepared to support it more.
That was all the easy stuff; now you have to think about how you are going to handle multiple users trying to access and update the same data (concurrency).
Imagine this: you are a user looking at employee record 1, and so is someone else. You change the birthday and save. Then the other user changes their supervisor and saves. How do you know something changed? What do you do if something changed? These are questions I cannot answer for you; you must decide based on your situation.
There are 2 main types of concurrency, optimistic and pessimistic. See this link for a great explanation and discussion of them: optimistic-vs-pessimistic-locking
You can look at this on a table-by-table basis.
If a table is never updated, you don't have to worry about concurrency.
If a table is rarely updated, like a table of states, you can decide whether it is worth the extra effort to add concurrency.
Pretty much everything else should have some type of concurrency.
Now, the million-dollar question: how?
You will find as many ways to handle concurrency as you will find colors in the rainbow. Here are some of the ones I like:
A simple number that you increment with each save. Small and easy.
DateTime stamp - as long as you don't expect 2 people to ever save the same record within the same second, this is easy. (I personally don't like it by itself.)
User name - pretty simple; it gives a bit of an audit trail by recording who last inserted/edited the record, but it doesn't handle an issue I have seen too often. Imagine the same scenario as above, but you have 2 instances of record 1 open. Now you change the data again, maybe the supervisor, and when you save, you overwrite the changes from your first save with those of the second save.
Guid - VB can create a GUID, SQL Server can create a GUID, and so can Access. It is nice and unique and, most importantly, you can create it on the client, so you don't have to requery the database after you save the record to get a refreshed copy.
A combination of these. I like 2 and 3 myself; together they give a mini audit trail and are unique to the user.
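As a sketch of option 1 (the incrementing number), SQL Server can maintain the counter for you with a rowversion column; the names here are illustrative:

    CREATE TABLE dbo.Employee (
        EmployeeId   INT        NOT NULL PRIMARY KEY,
        Birthday     DATE       NULL,
        SupervisorId INT        NULL,
        RowVer       ROWVERSION NOT NULL  -- changes automatically on every update
    );

    -- The client keeps the RowVer it read along with the record. The
    -- save succeeds only if nobody else saved in the meantime:
    UPDATE dbo.Employee
       SET Birthday = @Birthday
     WHERE EmployeeId = @EmployeeId
       AND RowVer = @RowVerFromRead;

    IF @@ROWCOUNT = 0
        RAISERROR('Record changed by another user; reload and retry.', 16, 1);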
If you use a DataAdapter, by default MS assumes concurrency checking means comparing EVERY field to make sure it did not change. This works, but it is completely unscalable and should not be done.
All of this depends on the size of your application and how you see it being used. Definitely do some more research before you settle on a decision.
There are a number of solutions here.
If I may suggest a drastic alternative: have you considered pairing the client running on the user's computer with a server component (through a web service)? A simpler alternative would be for the client to talk directly to a SQL Server (or other database) instance over the network.*
*I'm not a fan of having client-side apps talk directly to the database. It will mean maintenance headaches in the future, but I included it to give you options.
I found this random example via Google, so YMMV.
Sorry if my question is a bit ambiguous; I'll explain what I want to do.
I want to run a game on a webserver. It's a turn-based game; some of you might have come across it.
It's a game called Mafia: http://mafiascum.net/wiki/index.php?title=Newbie_Guide.
I know how it needs to work in terms of a MySQL database, a server-side scripting language, etc.
What I am not sure about is the best way to get a script to activate when the game starts, and to run a script every 3 minutes to update the game status:
Once 10 people join, the game starts.
People vote during a 3-minute period (votes would be stored in a database).
After 3 minutes a script needs to run to calculate the votes and remove a player.
Then 1 and a half minutes later the script needs to run again.
This cycle of 3 minutes, then 1 and a half minutes, needs to repeat until a certain condition is met, i.e. all players but 2 are dead or something.
When players refresh the page they need to be updated on the game's status.
I've read about sockets and wonder if they might be a good path to take. Would sockets be able to send JSON back to the clients, so that jQuery can then update the client with the game results?
Ideally I would like the front end to be done in jQuery and the backend script processing to be done by PHP or something.
How open would this be, in terms of people trying to cheat by sending attacks such as POST variables, SQLi attacks, etc.?
It's quite a broad question, and I am sure there is more than one approach and so more than one correct answer, but I would be interested in people's thoughts on how they would go about developing it.
Thanks for your time :)
I would simply use a CRON job or similar on the backend to update the status every x seconds, as you have suggested.
To trigger a game start, simply fire off a PHP command that sets your CRON job running.
This way the timing is controlled behind the scenes on the server, and you are free to update the game's status for your actual players using jQuery.
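If it helps, the core of that 3-minute job could be a single tally query along these lines (sketched in SQL Server syntax with invented table names; adapt for MySQL):

    -- Count the current phase's votes and eliminate the most-voted player.
    WITH Tally AS (
        SELECT TargetPlayerId,
               RANK() OVER (ORDER BY COUNT(*) DESC) AS Rnk
          FROM dbo.Vote
         WHERE GameId = @GameId AND Phase = @Phase
         GROUP BY TargetPlayerId
    )
    UPDATE p
       SET p.IsAlive = 0
      FROM dbo.Player AS p
      JOIN Tally      AS t ON t.TargetPlayerId = p.PlayerId
     WHERE t.Rnk = 1;  -- note: a tie eliminates all tied players in this naive version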
This is not an SO meta question; I am using SO only as an example.
On Stack Overflow, each answer, each comment, each question, and each vote has an effect that produces a badge at some point in time. I mean that after every action, a list of queries is tested.
E.g. if Mr. A upvotes Mr. B's answer, we have to check: has Mr. B's answer been upvoted 100 times? If so, give Mr. B a badge. Has Mr. A cast his 100th upvote? If so, give him a badge.
This means I have to run at least 100 queries/if-else checks for each action.
Now my real-life example: I have an application that receives online data from an attendance machine. When a user shows his card to the machine, I receive this and store it as a record. Based on this record I have multiple calculations: Is he late? Has he been late for 3 consecutive days? Is he on the right shift (day shift/night shift)? Is today a holiday? Is this overtime? Is he early? ...etc., etc.
What is the best strategy for this kind of requirement?
Update:
Can the SO team guide us on this?
Use queues and workflows. This way you decouple the moment of the update from the actual notifications, allowing the system to scale. Tightly coupled, trigger-based, or similar solutions cannot scale, as each update has to wait for all the interested parties to react to the notification. Designing the processing engine around workflows also lets you add steps and notification consumers easily by changing data, without changing the schema.
For instance, see how MSDN uses queues to handle similar problems with MSDN content: Building the MSDN Aggregation System.
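A bare-bones illustration of that decoupling, using a plain table as the queue (names invented; SQL Server's Service Broker is the heavier-duty equivalent):

    CREATE TABLE dbo.BadgeEvent (
        EventId     BIGINT IDENTITY PRIMARY KEY,
        EventType   VARCHAR(50) NOT NULL,   -- e.g. 'Upvote'
        ActorId     INT         NOT NULL,
        TargetId    INT         NOT NULL,
        ProcessedAt DATETIME    NULL
    );

    -- The vote handler only records the fact and returns immediately:
    INSERT INTO dbo.BadgeEvent (EventType, ActorId, TargetId)
    VALUES ('Upvote', @VoterId, @AnswerId);

    -- A background worker drains the queue in batches and runs the badge
    -- rules there, so the user-facing action never waits on them:
    UPDATE TOP (100) dbo.BadgeEvent
       SET ProcessedAt = GETUTCDATE()
    OUTPUT inserted.EventId, inserted.EventType, inserted.ActorId, inserted.TargetId
     WHERE ProcessedAt IS NULL;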
Couldn't you just use "flags" (other tables, other columns, whatever) to indicate when those special cases occur? That way you would only have to do one lookup (per special case) rather than a ton of lookups and/or joins. You could record the changes (third day late, etc.) on insert.
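For example, something along these lines at insert time (all names hypothetical, one swipe per day assumed):

    -- Store the swipe with its simple flag computed up front:
    INSERT INTO dbo.Attendance (EmployeeId, SwipeTime, IsLate)
    VALUES (@EmployeeId, @SwipeTime,
            CASE WHEN @SwipeTime > @ShiftStart THEN 1 ELSE 0 END);

    -- Set the "late 3 days running" flag only when today is late and the
    -- two previous days were late as well:
    UPDATE a
       SET a.LateThreeDaysRunning = 1
      FROM dbo.Attendance AS a
     WHERE a.EmployeeId = @EmployeeId
       AND a.SwipeTime  = @SwipeTime
       AND a.IsLate     = 1
       AND EXISTS (SELECT 1 FROM dbo.Attendance p
                    WHERE p.EmployeeId = a.EmployeeId AND p.IsLate = 1
                      AND CAST(p.SwipeTime AS DATE) = DATEADD(DAY, -1, CAST(@SwipeTime AS DATE)))
       AND EXISTS (SELECT 1 FROM dbo.Attendance p
                    WHERE p.EmployeeId = a.EmployeeId AND p.IsLate = 1
                      AND CAST(p.SwipeTime AS DATE) = DATEADD(DAY, -2, CAST(@SwipeTime AS DATE)));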
Also, what to check depends on a threshold.
E.g. is a person absent for the last 3 days? That check is required only when the person has already been absent for 2 days.
I mean, you need not check everything every time.
Also, how much of the info needs to be updated immediately? SO doesn't update things in real time.
Maybe you could use two databases with online replication between them: one that does nothing but capture the real-time data, and a second where you run the heavy calculations (for example, recalculate all the lateness checks every 10 minutes or on request). Locate these databases on different servers.