Cutscene system ruled just by time? - scripting

This is the first time I've needed to create a cutscene system. I have read a lot on the web about different ways to accomplish this task and have mixed them with my own ideas. Now it is implementation time, but I need some input from people with more experience than me in this field. Here we go:
1) Some years ago, I implemented a system where actions could be queued serially or in parallel, building a tree of actions that, when executed, produced the final result. This could certainly be used as the cutscene director, but wouldn't it be much simpler to just have a timeline with actions run at certain times? An example:
playMp3(0, "Song.mp3")
createObject(0, "Ogre", "Ogre1")
moveObject(1, "Ogre1", Vector3(100,100,1))
This way everything would be really simple to script. Serial actions are supported by spreading them correctly in time, and parallel actions just need to have overlapping time ranges.
One problem I have seen is that an action like sync() (which just waits for all previous actions to finish before starting the ones that come afterwards) can't be used, because we're using absolute time to trigger our actions. Anyway, a solution could be to layer our actions based on the last sync(). I mean something like this:
playMp3(0, "Song.mp3")
createObject(0, "Ogre", "Ogre1")
moveObject(1, "Ogre1", Vector3(100,100,1))
sync()
moveObject(0,....);
createObject(1,....);
As you may notice, times after sync() start again from 0, so when a sync() runs and determines that all actions since the last sync() have finished, the timeline's elapsed time becomes 0 again. This can be seen as little cutscene action groups.
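To make the idea concrete, here is a minimal sketch (in Python, with hypothetical action names and durations) of a timeline split into sync() groups: within a group, each action fires at its own offset, and the next group starts, with time reset to zero, once every action in the current group has finished.

```python
class Action:
    def __init__(self, start, duration, name):
        self.start = start        # offset within the group, in seconds
        self.duration = duration
        self.name = name
        self.started = False

    def finished(self, group_elapsed):
        return group_elapsed >= self.start + self.duration


class Cutscene:
    def __init__(self, groups):
        self.groups = groups      # lists of Actions, split at each sync()
        self.group_index = 0
        self.elapsed = 0.0        # resets to 0 at each sync()
        self.log = []             # names of actions as they start

    def update(self, dt):
        if self.group_index >= len(self.groups):
            return                # cutscene finished
        self.elapsed += dt
        group = self.groups[self.group_index]
        for action in group:
            if not action.started and self.elapsed >= action.start:
                action.started = True
                self.log.append(action.name)
        if all(a.finished(self.elapsed) for a in group):
            self.group_index += 1  # implicit sync(): advance, reset time
            self.elapsed = 0.0


scene = Cutscene([
    [Action(0, 2, "playMp3"), Action(1, 3, "moveOgre")],  # before sync()
    [Action(0, 1, "moveOgre2")],                          # after sync()
])
for _ in range(10):
    scene.update(1.0)  # one call per simulated frame
```

Keeping per-group time local is what makes sync() cheap: advancing to the next group is just an index increment and a reset of the elapsed counter.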
2) The previous explanation requires all actions to be added to the timeline before the cutscene starts playing. Is this how it's usually done, or do you usually add actions to the timeline as they are needed?
Well, I could be wrong here, but I think this could be a nice and simple way to lay out the actions for a cutscene. What do you think?
Thanks in advance.

I've done a few of these systems, so I'll tell you what I like to use; I hope this answers your questions.
One of the first cutscene systems I did used a LISP dialect, because it takes just a couple of hours' work to get a parser running. It looked something like...
(play "song1.mp3")
(set "ogre1" (create "ogre"))
(moveTo "ogre1" '(100 100 100))
(wait 1)
I created something like a virtual machine (VM) that processed my scripts. The VM didn't use a separate thread; instead, it had an update function that executed X instructions per call, or ran until it hit some synchronization instruction like "wait for 1 sec".
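Here is a rough sketch of that kind of update-driven VM, in Python; the opcodes are hypothetical stand-ins, and a real engine would dispatch them to actual play/create/move calls:

```python
class ScriptVM:
    def __init__(self, program):
        self.program = program      # list of (opcode, arg) tuples
        self.pc = 0                 # program counter
        self.wait_left = 0.0        # remaining wait time, in seconds
        self.log = []               # record of executed instructions

    def update(self, dt):
        if self.wait_left > 0:
            self.wait_left -= dt
            if self.wait_left > 0:
                return              # still waiting; resume next frame
        while self.pc < len(self.program):
            op, arg = self.program[self.pc]
            self.pc += 1
            if op == "wait":
                self.wait_left = arg  # yield until arg seconds pass
                return
            self.log.append((op, arg))  # "execute" the instruction


vm = ScriptVM([
    ("play", "song1.mp3"),
    ("create", "ogre"),
    ("wait", 1.0),
    ("move", (100, 100, 100)),
])
vm.update(0.5)   # runs play + create, then blocks on the wait
vm.update(0.5)   # wait has not yet elapsed
vm.update(0.5)   # wait done; move executes
```

Because update() returns at each wait, the game loop can call it once per frame without the VM ever needing its own thread.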
At the time, this system had to run on a J2ME device that didn't have an XML parser, and an XML parser was too much code to add. These days I'm using XML for everything except sounds and textures.
These days I'm using keyframe systems, as BRPocock suggested. The problem is that this is harder to manage without proper tools. If you're already using some 3D software for your models, I'd suggest investigating whether you can use that product. I use Blender for cutscenes in personal projects since it's free; at my workplace we use Maya and 3ds Max, but the idea is the same. I export to COLLADA, and then I have my animation tracks with keyframes. The catch is that the COLLADA format is not the simplest; it is made to be flexible and requires a decent amount of work to extract what you need from it.
The problem you will have with your format is describing the interpolation: you want to move the ogre from one position to another... how long should that take? The advantage of a keyframe system is that you can specify the position of the ogre over time, but scripting for such a system without a tool will be difficult. Still, here is a simple suggestion for a format:
(play 'song1.mp3')
(entity 'ogre'
(position (
0 (100 100 100)
2 (100 200 100)
5 (100 300 100)
7 (100 300 400)
8 (100 300 500))
(mood (
0 'happy'
7 'angry'))
(... more tracks for that entity if you need ...))
(entity 'human'
(position (.....)))
Now with a format like this you can see where the ogre has to be at any given time. So if you are at 1.5 sec into the cutscene, you can interpolate between the keyframes at 0 and 2 sec. Mood, on the other hand, can be something you don't interpolate; you just switch when the right time comes. I think this format is more flexible and will solve your sync issues, but I wouldn't suggest writing it by hand without tools for big scripts. If your cutscenes are only going to be a few seconds long with a few entities, then it may be good enough for you.
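A rough sketch of how such tracks can be sampled, assuming linear interpolation for positions and stepped values for moods (the track data mirrors the example format above):

```python
def lerp_track(keys, t):
    """keys: sorted list of (time, (x, y, z)); returns position at t."""
    if t <= keys[0][0]:
        return keys[0][1]           # clamp before the first key
    if t >= keys[-1][0]:
        return keys[-1][1]          # clamp after the last key
    for (t0, p0), (t1, p1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return tuple(a + (b - a) * f for a, b in zip(p0, p1))


def step_track(keys, t):
    """keys: sorted list of (time, value); holds the last key <= t."""
    value = keys[0][1]
    for key_time, key_value in keys:
        if key_time <= t:
            value = key_value       # no interpolation, just switch
    return value


position = [(0, (100, 100, 100)), (2, (100, 200, 100)), (5, (100, 300, 100))]
mood = [(0, "happy"), (7, "angry")]

lerp_track(position, 1.5)   # 75% of the way between the 0s and 2s keys
step_track(mood, 5.0)       # still "happy"; switches to "angry" at 7
```

Note that the step track is exactly the sync-free behavior the answer describes: each track is sampled independently by absolute time, so no track ever waits on another.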

Related

Optaplanner: How to calculate score delta for move

I'm using Optaplanner to automatically solve school timetables. After a timetable has been solved the user will manually change some lessons and will get feedback on how this affects the score via the following call:
scoreManager.updateScore(timetable);
This call takes some 200 ms and will, I assume, do a complete evaluation. I'm trying to optimize this and want to pass in only a Move object so that OptaPlanner has to recalculate just the delta, like:
scoreManager.updateScore(previousTimetable,changeMove);
Is there a way to do this?
There really is no way to do just a single move. You don't do moves; the solver does moves. You can only make external problem changes to the solution. You should look into the ProblemChange interface and its use in the SolverManager.
However, the problem change will likely reset the entire working solution anyway. And after the external change is done, you're not guaranteed that the solution will still make sense. (What if it breaks hard constraints now?) You simply need to expect and account for the fact that, after users submit their changes, the solver will need to run; possibly even for a prolonged period of time.

Getting ScoreExplanation during/after a Custom Move

My solution is for VRPTW and I've created a Custom Move.
After a custom move has been tried (prior to being accepted), I would like to see a score breakdown of it (not just the score itself).
Where is a good location to use ScoreExplanation to see the detailed score breakdown? Eg. in my AbstractMove implementation somewhere?
I have TRACE mode on and can see the score. I tried pulling it up when the next custom move ran, and that seemed to work, since it retrieved the correct score from the previous custom move; but when I looked at the ScoreExplanation, it was completely off (it didn't add up to the score).
Score explanations are not designed to be used inside a step; they are far too slow for that. You are free to use the ScoreManager API any time you like, but you will pay a heavy performance penalty if you do so on the solver thread or the move threads.

Solving VRPTW using OptaPlanner

I have tried to run example https://github.com/droolsjbpm/optaplanner/tree/master/optaplanner-examples/src/main/java/org/optaplanner/examples/vehiclerouting
as described here:
http://docs.jboss.org/optaplanner/release/6.3.0.Final/optaplanner-docs/html_single/index.html#downloadAndRunTheExamples with the data set cvrptw-25customers.xml. When I changed readyTime and dueTime for some customers, it didn't result in any change in the score. It looks like the program doesn't care about time windows. Should I change something in the Java classes? My goal is to get the time needed to drive to all customers, taking all time windows into account.
This should work; I've done it several times myself. Potential causes:
Did you load the XML file again? Try changing the VehicleRoutingPanel code so it's obvious which customer you changed and whether the values actually changed. Compare screenshots from before and after your change (or duplicate the XML file so you keep the original).
If some ready times and due times change, the score doesn't necessarily change. Try making very small time windows at annoying times; that should definitely impact the score and make it worse.

Can I make time-lapse view in perforce faster?

I have a problem with Perforce.
I like Perforce's time-lapse view function very much.
It helps me find who made a mistake.
The problem is that when a file is pretty big and frequently changed,
opening time-lapse view takes a very long time.
So I need something like SQL's "select top 100" on the time-lapse data, meaning I just need the last 100 (or 50, or 20) changes in the history to see what changed recently.
Does Perforce have that function, or is there any kind of plug-in or Perforce command for it?
Or I'd like to hear your know-how for finding change history faster.
Thanks in advance.
I love time lapse view, but I often start with the "File History" view. Since, as you point out, the most interesting changes are the recent ones, I generally look through the recent changes and their descriptions first. Often, I see a change that looks particularly interesting and I study that changelist by itself and see what I'm interested in.
Regarding the speed of time lapse view, I wonder whether the problem is on your server or on your client. A couple things to try:
Is time lapse view also slow when you try it on a colleague's workstation?
If you run 'p4 annotate >tmp', is that also slow?
If 'p4 annotate' is fast, you might find it worth using for those particularly large files with very long histories. Time lapse view is very powerful and easy to read, but it collects a vast amount of information from the server, and then must format that information for display.
In my case, when I bring up time lapse view, I'm generally planning to study the results for some time, so I'm willing to wait a few seconds while it loads.
If the problem is that your server is overloaded, you should contact your Perforce administrator and see what he can do. Perhaps he can add more resources (typically memory) to your server, or perhaps you should consider deploying a read-only replica, which can service operations like time lapse view entirely from the replica without requiring any cycles from the main server. Perforce technical support is always happy to help with problems like these.

Performance metrics on specific routines: any best practices?

I'd like to gather metrics on specific routines of my code to see where I can best optimize. Let's take a simple example and say that I have a "Class" database with multiple "Students." Let's say the current code calls the database for every student instead of grabbing them all at once in a batch. I'd like to see how long each trip to the database takes for every student row.
This is in C#, but I think it applies everywhere. Usually when I get curious as to a specific routine's performance, I'll create a DateTime object before it runs, run the routine, and then create another DateTime object after the call and take the milliseconds difference between the two to see how long it runs. Usually I just output this in the page's trace...so it's a bit lo-fi. Any best practices for this? I thought about being able to put the web app into some "diagnostic" mode and doing verbose logging/event log writing with whatever I'm after, but I wanted to see if the stackoverflow hive mind has a better idea.
For database queries, you have two small cache-related problems: the data cache and the statement cache.
If you run the query once, the statement is parsed, prepared, bound and executed. Data is fetched from files into cache.
When you execute the query a second time, the cache is used, and performance is often much, much better.
Which is the "real" performance number: the first one or the second one? Some folks say the "worst case" is the real number, and we have to optimize that. Others say "typical case" and run the query twice, ignoring the first run. Others say "average" and run it 30 times, averaging them all. Others say "typical average": run it 31 times and average the last 30.
I suggest that the "last 30 of 31" is the most meaningful DB performance number. Don't sweat the things you can't control (parse, prepare, and bind times). Sweat the stuff you can control: data structures, I/O loading, indexes, etc.
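The "last 30 of 31" idea can be sketched as a small harness; the operation being timed below is a placeholder for your actual database call:

```python
import time


def timed_average(operation, warmup=1, runs=30):
    """Run operation warmup times unmeasured, then average runs timings."""
    for _ in range(warmup):
        operation()                 # prime statement/data caches
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        samples.append(time.perf_counter() - start)
    return sum(samples) / len(samples)


# Stand-in workload; replace with the query you actually care about.
avg = timed_average(lambda: sum(range(10_000)))
```

Discarding the warmup run means the number you report reflects the steady-state behavior you can actually tune, not one-time parse and cache-fill costs.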
I use this method on occasion and find it to be fairly accurate. The problem is that in large applications with a fairly hefty amount of debugging logs, it can be a pain to search through the logs for this information. So I use external tools (I program in Java primarily, and use JProbe) which allow me to see average and total times for my methods, how much time is spent exclusively by a particular method (as opposed to the cumulative time spent by the method and any method it calls), as well as memory and resource allocations.
These tools can assist you in measuring the performance of entire applications, and if you are doing a significant amount of development in an area where performance is important, you may want to research the tools available and learn how to use one.
Sometimes the approach you take will give you the best look at your application's performance.
One thing I can recommend is to use System.Diagnostics.Stopwatch instead of DateTime:
DateTime is accurate only to about 16 ms, whereas Stopwatch is accurate to the CPU tick.
But I recommend complementing it with custom performance counters in production, and running the app under a profiler during development.
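The same distinction exists outside .NET; in Python, for example, time.perf_counter() is the high-resolution monotonic clock intended for interval timing, while time.time() is coarse wall-clock time. A minimal sketch:

```python
import time

start = time.perf_counter()
total = sum(range(100_000))     # some stand-in work to measure
elapsed = time.perf_counter() - start

# perf_counter is guaranteed monotonic; time.time is not, and its
# resolution is platform-dependent (historically ~16 ms on Windows).
info = time.get_clock_info("perf_counter")
```

Using a monotonic clock also protects the measurement from system clock adjustments (NTP, daylight saving), which can make wall-clock deltas negative.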
There are some profilers available but, frankly, I think your approach is better. The profiler approach is overkill. Maybe the use of profilers is worth the trouble if you absolutely have no clue where the bottleneck is. I would rather spend a little time analyzing the problem up front and putting in a few strategic print statements than figure out how to instrument the app for profiling and then pore over gargantuan reports where every executable line of code is timed.
If you're working with .NET, then I'd recommend checking out the Stopwatch class. The times you get back from that are going to be much more accurate than an equivalent sample using DateTime.
I'd also recommend checking out ANTS Profiler for scenarios in which performance is exceptionally important.
It is worth considering investing in a good commercial profiler, particularly if you ever expect to have to do this a second time.
The one I use, JProfiler, works in the Java world and can attach to an already-running application, so no special instrumentation is required (at least with the more recent JVMs).
It very rapidly builds a sorted list of hotspots in your code, showing which methods your code is spending most of its time inside. It filters pretty intelligently by default, and allows you to tune the filtering further if required, meaning that you can ignore the detail of third party libraries, while picking out those of your methods which are taking all the time.
In addition, you get lots of other useful reports on what your code is doing. It paid for the cost of the licence in the time I saved the first time I used it; I didn't have to add lots of logging statements and construct a mechanism to analyse the output: the developers of the profiler had already done all of that for me.
I'm not associated with ej-technologies in any way other than being a very happy customer.
I use this method and I think it's very accurate.
I think you have a good approach. I recommend that you produce "machine friendly" records in the log file(s) so that you can parse them more easily: something like CSV or another consistently structured, delimited format.
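As a sketch of what such machine-friendly records might look like (the field names here are hypothetical), one CSV row per measurement parses trivially later:

```python
import csv
import io
import time


def log_timing(writer, routine_name, elapsed_ms):
    """Append one structured timing record: timestamp, routine, duration."""
    writer.writerow([
        time.strftime("%Y-%m-%dT%H:%M:%S"),
        routine_name,
        f"{elapsed_ms:.3f}",
    ])


buf = io.StringIO()                 # stands in for an open log file
writer = csv.writer(buf)
writer.writerow(["timestamp", "routine", "elapsed_ms"])
log_timing(writer, "load_students", 12.5)

# Later, the log can be read back with the csv module or a spreadsheet.
rows = list(csv.reader(io.StringIO(buf.getvalue())))
```

A header row plus fixed columns means the same file can feed a spreadsheet, a quick script, or a monitoring dashboard without any custom parsing.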