I've got a virtual world in Adobe Flash AS2 using SmartFoxServer Pro. One problem: every time a user logs back in, his data resets; the server never saves it. My question is: how do I save permanent data to the server, so that even when other people join, they'd see it? For example, if Player1 places a block from his inventory and then re-joins the game the following week, he'd still see the block, because the server would have saved that block to its map. Another example: how do I make the server automatically have mobs on it, so that when users log in, they'd have monsters waiting for them, and when a mob's life is at 50%, all players would see its life as 50%?
That is my question.
You need to persist your game state in a database and load the information when your user logs in to the game.
More info on databases and SmartFoxServer here: http://docs2x.smartfoxserver.com/DevelopmentBasics/database-recipes
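As a rough illustration of the "persist and reload" idea for your block example, here is a minimal plain-JDBC sketch; the world_blocks table, its columns, and the connection details are assumptions for illustration, not part of SmartFoxServer's API. The same pattern covers mobs: store each mob's position and hit points in a row, update it whenever the state changes, and read it back when building the world for a joining player.

    // Minimal JDBC sketch: persist a placed block, and rebuild the shared map
    // from the database on login. Table, columns, and credentials are hypothetical;
    // a MySQL driver is assumed on the classpath.
    import java.sql.*;
    import java.util.*;

    public class WorldStore {
        private static final String URL = "jdbc:mysql://localhost/game"; // placeholder

        public void saveBlock(String playerId, int blockType, int x, int y) throws SQLException {
            try (Connection c = DriverManager.getConnection(URL, "user", "pass");
                 PreparedStatement ps = c.prepareStatement(
                     "INSERT INTO world_blocks (player_id, block_type, x, y) VALUES (?, ?, ?, ?)")) {
                ps.setString(1, playerId);
                ps.setInt(2, blockType);
                ps.setInt(3, x);
                ps.setInt(4, y);
                ps.executeUpdate();
            }
        }

        // Called when a user logs in: every saved block is loaded, so all
        // players see the same persistent world.
        public List<int[]> loadBlocks() throws SQLException {
            List<int[]> blocks = new ArrayList<>();
            try (Connection c = DriverManager.getConnection(URL, "user", "pass");
                 Statement st = c.createStatement();
                 ResultSet rs = st.executeQuery("SELECT block_type, x, y FROM world_blocks")) {
                while (rs.next()) {
                    blocks.add(new int[] { rs.getInt(1), rs.getInt(2), rs.getInt(3) });
                }
            }
            return blocks;
        }
    }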
The problem: a .NET application trying to save many records to SQL Server. BeginTrans was used, and right before the commit a warning message is shown to the end user to confirm whether to proceed with saving the data or not. The user simply left the computer and went away!
Now all other users are unable to access the locked records. Sometimes almost the entire system is affected, since almost all transactions update the same records. The confirmation message must be shown after the data gets updated and before the commit, so the user can still roll back. What could be the best solution?
If no solution is found, the last thing I might do is roll back, show the confirmation message, and if the user accepts, save the data again without any confirmation message (which I don't think is the right way).
My question is: what is the best I can do? Any ideas?
This sounds like a WinForms app? It also sounds like you want to confirm the intent of the user's action. Are you in a position to only start the transaction once they confirm they intend to save the data?
Ideally, you should
Prompt the user via [OK | Cancel]
Perform the database transaction
If the result of the transaction is deadlock (or any other failure), inform the user the save operation failed
In other words, the update of records should be a synchronous call.
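To make the ordering concrete, here is a minimal sketch of that flow. The asker is on .NET, but the shape is identical in ADO.NET; this sketch uses JDBC, and the dialog helpers and records table are placeholders.

    // Sketch: prompt FIRST, then open the transaction and commit synchronously,
    // so locks are held for milliseconds rather than until the user returns.
    import java.sql.*;

    public class SaveAction {
        public void save(Connection conn, int recordId, double newValue) throws SQLException {
            if (!confirmWithUser("Save changes?")) {    // 1. prompt before any lock is taken
                return;                                  // user cancelled; nothing was locked
            }
            conn.setAutoCommit(false);                   // 2. transaction starts only now
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE records SET value = ? WHERE id = ?")) {
                ps.setDouble(1, newValue);
                ps.setInt(2, recordId);
                ps.executeUpdate();
                conn.commit();
            } catch (SQLException e) {
                conn.rollback();                         // 3. on deadlock/failure, inform the user
                showError("Save failed: " + e.getMessage());
            } finally {
                conn.setAutoCommit(true);
            }
        }

        private boolean confirmWithUser(String msg) { /* e.g. a MessageBox/JOptionPane */ return true; }
        private void showError(String msg) { /* UI feedback goes here */ }
    }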
EDIT: after understanding the specifics mentioned in the comment below, I would recommend some form of server-side task queue that all of these requests flow through. Your client would submit a request to the server, and the server application would then be the software responsible for updating records in the database. The clients would make their requests to this application, and the requests would be processed in the order they were received. I don't have much experience with inventory-tracking software, but I understand it needs to be absolutely correct, so this is just a rough idea; I'm sure someone with more experience in inventory tracking will have a better pattern. The proposed pattern creates a large bottleneck on the server that is responsible for updating the records. For example, this pattern would be terrible for someone like Amazon.
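A rough sketch of that queue idea, assuming a single worker thread that applies updates strictly in arrival order (all names here are illustrative):

    // Clients submit work items instead of touching the database directly;
    // one worker drains the queue FIFO, so updates are serialized.
    import java.util.concurrent.*;

    public class UpdateQueue {
        private final BlockingQueue<Runnable> pending = new LinkedBlockingQueue<>();

        public UpdateQueue() {
            Thread worker = new Thread(() -> {
                while (true) {
                    try {
                        pending.take().run();   // one database update at a time
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            });
            worker.setDaemon(true);
            worker.start();
        }

        public void submit(Runnable databaseUpdate) {
            pending.add(databaseUpdate);
        }
    }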
My program enables users to play online against other players. The way I achieve this is by having a lobby form with a DataGridView displaying all the matches in a DataTable entitled "matches". From there you can select a match (send a request) or host one, which will direct you to a "waiting" form and insert your match into the database. The problem is that this database fills up fast with old matches: I have coded it to delete records on form close, but this does not run if the process (the whole program) is ended from, say, Task Manager, and I am left with an old game in the database. My question is: how can I ensure that when a user stops hosting, their corresponding match is deleted? The only solution I can think of involves having a server constantly ping all the matches and delete them if they do not respond.
Your idea of the server pinging the client is a good one, but it can be reversed. For example, using a TCP connection, the client can send the player's username or ID to the server every two minutes. If a player's client has not pinged the server in two minutes, the server can assume the player is offline and delete the old 'match' itself, instead of relying on the client to do it. If you run the server as a dedicated process, this keeps working even when clients disappear.
For more information on TCP data communication, just google it!
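Here is a sketch of the server side of that heartbeat scheme; the two-minute timeout, the sweep interval, and deleteMatchFromDatabase are all placeholders to show the shape of the idea:

    // Clients report in periodically; a scheduled sweep deletes matches whose
    // host has not been heard from within the timeout.
    import java.util.Map;
    import java.util.concurrent.*;

    public class MatchRegistry {
        private static final long TIMEOUT_MS = 2 * 60 * 1000;
        private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

        public MatchRegistry() {
            ScheduledExecutorService sweeper = Executors.newSingleThreadScheduledExecutor();
            sweeper.scheduleAtFixedRate(this::removeStaleMatches, 30, 30, TimeUnit.SECONDS);
        }

        // Called whenever a heartbeat arrives over the TCP connection.
        public void heartbeat(String hostPlayerId) {
            lastSeen.put(hostPlayerId, System.currentTimeMillis());
        }

        private void removeStaleMatches() {
            long now = System.currentTimeMillis();
            lastSeen.entrySet().removeIf(e -> {
                if (now - e.getValue() > TIMEOUT_MS) {
                    deleteMatchFromDatabase(e.getKey());  // e.g. DELETE FROM matches WHERE host = ?
                    return true;
                }
                return false;
            });
        }

        private void deleteMatchFromDatabase(String hostPlayerId) { /* DB call goes here */ }
    }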
Hope this helped,
Rodit
We have data stored in a data warehouse as follows:
Price
Date
Product Name (varchar(25))
We currently only have four products. That changes very infrequently (on average once every 10 years). Once every business day, four new data points are added representing the day's price for each product.
On the website, a user can request this information by entering a date range and selecting one or more product names. Analytics shows that the feature is not heavily used (about 10 user requests per week).
It was suggested that the data warehouse should daily push (SFTP) a CSV file containing all data (currently 6718 rows of this data and growing by four each day) to the web server. Then, the web server would read data from the file and display that data whenever a user made a request.
Usually, the push would only be once a day, but more than one push could be possible to communicate (infrequent) price corrections. Even in the price correction scenario, all data would be delivered in the file. What are problems with this approach?
Would it be better to have the web server make a request to the data warehouse per user request? Or does this have issues such as a greater chance for network errors or performance issues?
Would it be better to have the web server make a request to the data warehouse per user request?
Yes, it would. You have very little data, so there is no need to try to 'cache' it in some way (apart from the fact that CSV might not be the best way to do this).
There is nothing stopping you from making these requests from the webserver to the database server. With as little data as this you will not find performance an issue, and even if it became an issue as everything grows, there is a lot to be gained on the database side (indexes, etc.) that will help you survive the next 100 years in this fashion.
The number of requests from your users (also extremely small) does not need any special treatment, so again, a direct query would be best.
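To make "direct query" concrete, a sketch of what the per-request lookup could look like; the prices table and column names are assumptions. With roughly 6,700 rows and an index on (product_name, date), this is effectively instantaneous, and the parameter binding also removes the injection worry raised below.

    import java.sql.*;
    import java.time.LocalDate;

    public class PriceQuery {
        // One parameterized lookup per user request; the caller closes the
        // connection (and with it the statement and result set) when done.
        public static ResultSet query(Connection conn, String product,
                                      LocalDate from, LocalDate to) throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                "SELECT date, price FROM prices WHERE product_name = ? AND date BETWEEN ? AND ?");
            ps.setString(1, product);
            ps.setDate(2, Date.valueOf(from));
            ps.setDate(3, Date.valueOf(to));
            return ps.executeQuery();
        }
    }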
Or does this have issues such as a greater chance for network errors or performance issues?
Well, it might, but that would not justify your CSV method. Examples, and why you need not worry:
the connection with the database server is down.
This is an issue for both methods, but with only one connection per day, the chance of a 1-in-10,000 failure might seem to favor the once-a-day method. These issues should not come up very often, though, and when they do you should be able to handle them (retry the request, show the user a message). This is what enormous numbers of websites do, so trust me when I say this will not be an issue. Also, think of what it would mean if your daily update failed; that would present a bigger problem!
Performance issues
As said, given the amount of data and requests, this is not a problem. And even if it becomes one, it is a problem you should be able to catch at a different level: use a caching system (non-CSV) on the database server, use a caching system on the webserver, and fix your indexes to stop performance from being a problem.
BUT:
It is far from strange to want your data warehouse separated from your web system. If this is a requirement, and it surely could be, the best thing you can do is re-create your warehouse database (the one I just defended as being good enough to query directly) on another machine. You might get good results with a master-slave setup:
your data warehouse is the master database: it sends all changes to the slave but is inaccessible otherwise
your second database (even on your webserver) gets all updates from the master and is read-only: you can only query it for data
your webserver cannot connect to the data warehouse, but it can connect to the slave to read information. Even if there were an injection hack, it wouldn't matter, as the slave is read-only.
Now there is no single moment where you update the queried database (the master-slave replication keeps it up to date at all times), and there is no chance that queries from the webserver put your warehouse in danger. Profit!
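A tiny sketch of the webserver side of that split, assuming a MySQL replica; the host and credentials are placeholders, and setReadOnly is merely a driver hint layered on top of the replica's own read-only configuration:

    import java.sql.*;

    public class ReadOnlyWarehouse {
        // The webserver only ever opens connections to the replica, never to
        // the master warehouse.
        public static Connection open() throws SQLException {
            Connection conn = DriverManager.getConnection(
                "jdbc:mysql://replica-host/warehouse", "web_reader", "secret");
            conn.setReadOnly(true);  // belt-and-braces on top of the read-only replica
            return conn;
        }
    }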
I don't really see how SQL injection could be a real concern. I assume you have some calendar-type field that the user fills in to get data out. If this is the only form, just ensure that the only field in it is a date; then something like DROP TABLE isn't possible. As for getting access to the database, that is another issue. However, a separate file with just the connection function should do fine in most cases, so that a user can't, say, open your webpage in an HTML viewer and see your database connection string.
As for the CSV, I would have to say that querying the database per user, especially if the feature is only used about 10 times weekly, would be much more efficient than the CSV. I see the CSV as overkill: with only about 10 users attempting to get some information, exporting an updated CSV every day would be too much work for so little payoff.
EDIT:
Also, if an attack is a big concern, which really depends on the nature of the business, the data being stored, and the visitors you receive, you could always create a backup as another option. I don't really see a reason for this as your question is currently stated, but it is possible that even with the best security an attack could happen. That mainly depends on whether attackers want the information you have.
I have been getting my feet wet with Core Data. I'm writing a card game and I'm able to store and retrieve game statistics. I'm also storing the game's state after each move to allow the application to resume a game that was in progress when the application quit and to also facilitate my home-brew undo system.
Unfortunately, the longer I play my game, the slower it feels. I think this is because after each move I'm storing 52 cards and their specific states in SQLite. I suspect this just gets slower the more data I cram into the DB.
Because of this, I plan to try using the built-in undo management in Core Data. (I didn't remember this was there until it was too late on my initial implementation.) My question is: if the app is closed mid-game, can it be restarted with the undo management in the same state?
I.e., imagine a user makes ten moves in this game. They would be able to undo ten times. If they quit the app, close it entirely, and then restart it, can I return Core Data to a state where the user can still do the ten undo steps?
A little bit of research suggests I might be able to simply use NSCoding to persist the NSManagedObjectContext to a serialized file when the app is closed and then restore its state from this file when the app is restarted.
Am I on the right path? Any suggestions?
Thanks!
No, the undo manager (NSUndoManager) is not persistent.
Yes, you may use NSCoding or even a plist to save the state.
For more information on this topic, you may refer to:
http://www.cimgf.com/2011/10/11/core-data-and-the-undo-manager/
I'm developing a Flash game in ActionScript 2, and the issue is that this game has to count time securely.
It can't count time with the Date class, because Flash Player takes the time from the local computer, and the user can change the local time, so the reported time could be faked.
I haven't considered taking the time from the server because of the 3WH (three-way handshake) latency; it would not be practical.
What do you suggest?
You cannot perform secure computations on the user's system. They can manipulate it.
If that is a problem, your only real choice is to do it on the server. Of course, they could sandbox your app and fake a server conversation, so that's not entirely secure from within the client, but in most cases that won't cause a big problem, since it should only affect that user (unless data from the manipulated/forged server connection is then sent somewhere and affects other users).
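For the timing question specifically, the server-side version is simple: record when the session starts on the server and compute elapsed time there, never trusting the client's clock. A minimal sketch (session IDs and storage are illustrative):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class SessionTimer {
        private final Map<String, Long> startedAt = new ConcurrentHashMap<>();

        public void startSession(String sessionId) {
            startedAt.put(sessionId, System.currentTimeMillis());
        }

        // Elapsed time as measured by the server's clock only; changing the
        // local clock on the client has no effect on this value.
        public long elapsedMillis(String sessionId) {
            Long start = startedAt.get(sessionId);
            return start == null ? 0 : System.currentTimeMillis() - start;
        }
    }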
When you are developing games that run on a system you do not control, there is basically no solution. You can make it hard for people, but you can never be certain unless you move all the important parts of your game to the server. Even if you made the game call the server just for the time, people could insert a proxy and fake the response...
So if you really want to be sure no one messes with the game, you have to make it run on the server (I know, a lot of the time this is unwanted and/or impossible). In all other cases you can make it hard (obfuscate game code, encrypt communication) but never impossible; see Google for lots of suggestions on making it hard.
The best way of solving the issue is to remove the incentive for players to cheat, so they simply won't try it at all; of course, a lot of the time this is really hard.
See also: Cheat Engine, in case you didn't know about that one.