Delayed inputs racing game - GML

I have never used GameMaker before, but it seemed pretty easy to use for my school project. I want to make a small racing game where the car has varying degrees of delay in the controls. The project is for exploring latency, so the delay in the controls could be, say, 0.05 seconds or 0.5 seconds, but it has to save the inputs and output them in that order. Do you know how I can do this? I don't really know any commands in the language, so any help would be greatly appreciated.
Also, I'd like to add a survey sheet at the end that saves the data to something like an Excel file. Is this possible with GML?

A common way to simulate latency is to create one or more ds_queue structures that you push new inputs into each frame and pull the current frame's input from. The number of entries you pre-fill the queue with determines the latency in frames.
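Here is a minimal sketch of that idea in GML; the variable names (delay_seconds, input_queue, turn_speed, accel) and the fixed 60 frames per second are assumptions for illustration, so adjust them to your project:

```gml
/// Create event -- set up the delay queue (assumes the game runs at 60 fps)
delay_seconds = 0.25;                        // try anything from 0.05 to 0.5
delay_frames  = round(delay_seconds * 60);   // 60 = assumed game speed in frames per second
input_queue   = ds_queue_create();

// Pre-fill the queue with "no input" entries so the first real input
// only comes back out after delay_frames steps.
repeat (delay_frames) {
    ds_queue_enqueue(input_queue, [0, 0]);   // [steer, throttle]
}

/// Step event -- record this frame's input, apply the delayed one
var steer    = keyboard_check(vk_right) - keyboard_check(vk_left);
var throttle = keyboard_check(vk_up)    - keyboard_check(vk_down);
ds_queue_enqueue(input_queue, [steer, throttle]);

// Dequeue the input that was recorded delay_frames steps ago and drive the car with it.
var delayed = ds_queue_dequeue(input_queue);
direction  -= delayed[0] * turn_speed;       // turn_speed and accel are your own car variables
speed      += delayed[1] * accel;

/// Clean Up event -- free the queue when the object is destroyed
ds_queue_destroy(input_queue);
```

Because a queue is strictly first-in, first-out, the inputs always come back out in the order they were made; changing delay_seconds (or exposing it as a menu setting) gives you your different latency conditions.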
For data export, the easiest option is to produce a CSV/TSV file, which is just a text file with comma or tab separators and can be imported into a wide variety of spreadsheet software, including Excel.
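As a rough example of what that could look like in GML, this appends one survey response per row; the file name and the q1/q2/q3 variables are placeholders you would replace with your own:

```gml
/// Example: append one survey response as a row of a CSV file.
/// "survey_results.csv" and q1_answer/q2_answer/q3_answer are made-up names.
var f = file_text_open_append("survey_results.csv");
file_text_write_string(f, string(delay_seconds) + "," +
                          string(q1_answer) + "," +
                          string(q2_answer) + "," +
                          string(q3_answer));
file_text_writeln(f);
file_text_close(f);
```

The file ends up in the game's save directory and opens directly in Excel or Google Sheets; if you want a header row, write it once the first time the file is created (file_exists can tell you whether it already exists).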

Related

I'm looking to create an automated numbering system for custom paint by number kits in Photoshop

So I know very little about programming all around. I'm adept at photoshop and I'm looking to automate the numbering system for making these paint by number kits. I convert the images into vector format and set a maximum number of color variations. I then use adobe illustrator to create the outlined partitions of the image by color. This is all well and good, it's automated and efficient as far as I need.
My dilemma is that I do not have a system that can number these partitions in a clear and uniform fashion. I must do this tediously in photoshop, taking hours to finish.
I am looking to create or find a system that will do this last step automatically.
My vision for how this would look would be numbers, 1-20 or so depending on the set color cap, evenly distributed across each partition in a uniform font and size. The idea is that each partition would contain a grid of one repeated number (the number referencing the color needed in that partition), with many copies across larger partitions and only a few on the smaller partitions. It would hopefully look like so:
You can see here how tedious this can become.
I don't know how to accomplish this, but I'm wondering how complicated this process would be in theory, and whether it would be better for me to learn how to do it myself, hire a professional, or continue the hand numbering. It's creating a labor cap on my small business that is preventing further growth.
Any and all help is very much appreciated; if I can provide more context or specifications I would be more than happy to do so. Thank you!
Just for fun I've managed to tweak old Johnware's script (Circle Fill). Now it can fill with given letters (numbers, for example). It works to a degree, but the result is far from ideal:
Probably it can be used for start.
I believe a real programmer could make it way better.
My tweaked version of the script is here: https://disk.yandex.ru/d/Ze4-1DQoNRVF1g
Update
I've improved the script further. Now it:
works more precisely
handles several selected paths
remembers values in the dialog window
sets font size
Here is the updated version of the script: https://disk.yandex.ru/d/0pcpLDGrfQKMJA
It took me about 15 minutes to do this:
But I had to split some complex paths with the Knife tool. Sometimes the script throws a mysterious error; I just selected another set of paths and ran the script again and again.
It is not a final result, but it's close. I think it's much faster than doing it manually.
It can be done with a script to some degree. It will work fine for simple shapes, but for complicated shapes it will be hard to calculate where to put all the numbers and how many of them will be enough.
But I have seen scripts that can fill any shape with arbitrary symbols, so technically it should be possible to fill any shape with numbers.
Of course, if you aren't a seasoned coder it makes no sense to try to do it at home. You need a pro (not me, even).
And I see another very simple option as well:
It doesn't even need a script. What do you think?

Issue updating LabVIEW waveform chart

I have a LabVIEW program where I am collecting data at 2 Hz. I have 8 channels of data I need to plot on a waveform chart. However, because the program needs to run for long periods of time, I run into issues with memory from storing all the data on the chart. I would like the chart's update frequency to be a user input, but I cannot figure out how to do it. I tried passing the data in through a loop, but it would never execute.
To paint a clearer picture, I want to plot only every other data point, or even fewer. I don't need all the data points on the plot.
You can use a master/slave setup and have an event-triggered update. If you need to, you can create a global variable file to store your data; when you trigger the update, it will read the data from there.
I tend to always separate the UI in LabVIEW into its own thread this way, and it works well for what you're describing.

OxyPlot: IsValidPoint on realtime LineSeries

I've been using OxyPlot for a month now and I'm pretty happy with what it delivers. I'm getting data from an oscilloscope and, after some fast processing, I'm plotting it to a graph in real time.
However, if I compare my application's CPU usage to that of the application provided by the oscilloscope manufacturer, mine loads the CPU a lot more. Maybe they're using some GPU-based plotter, but I think I can reduce my CPU usage with some modifications.
I'm capturing 10,000 samples per second and adding them to a LineSeries. I'm not plotting all that data; I'm decimating it to a constant number of points, say 80 points for a 20-second measurement, so I have 4 points/sec while fully zoomed out and a bit more detail if I zoom in to a specific range.
With the aid of ReSharper, I've noticed that the application (I have 6 different plots) is calling the IsValidPoint method a huge number of times (something like 400,000,000), which is taking a lot of time.
I think the problem is that, when I add new points to the series, it checks every point for validity instead of only the newly added values.
Also, it spends a lot of time in the MeasureText/DrawText method.
My question is: is there a way to override those methods and adapt them to my needs? I'm adding 10,000 new values each second, but the earlier ones remain the same, so there's no need to re-validate them. Also, the text shown doesn't change.
Thank you in advance for any advice you can give me. Have a good day!

LabVIEW - buffer data then save to Excel file

My question is with respect to a LabVIEW VI (2013) that I am trying to modify. (I am only just learning to use this language. I have searched the NI site and Stack Overflow for help without success; I suspect I am using the incorrect keywords.)
My VI consists of a flat sequence, one pane of which contains a while loop where integer data is collected from a device and displayed on a graph.
I would like to be able to buffer this data and then send it to disk when a preset number of samples has been collected. My attempts so far result in only the last record being saved.
Specifically, I need to know how to save the data in a buffer (array) and then, when the correct number of samples has been captured, save it all to disk (saving as it is captured slows the process down too much).
Hope the question is clear and thanks very much in advance for any suggestions.
Tom
Below is a simple circular-buffer that holds the most recent 100 readings. Each time the buffer is refilled, its contents are written to a text file. Drag the image onto a VI's block diagram to try it out.
As you learn more about LabVIEW and as your performance and multi-threaded needs increase, consider reading about some of the LabVIEW design patterns mentioned in the other answers:
State machine: http://www.ni.com/tutorial/7595/en/
Producer-consumer: http://www.ni.com/white-paper/3023/en/
I'd suggest splitting the data acquisition and the data saving into two different loops using a producer/consumer design pattern.
Moreover, if you need very high throughput, consider using the TDMS file format.
Have a look here for an overview: http://www.ni.com/white-paper/3727/en/
A screenshot would definitely help. However, some things are clear:
Unless you are dealing with a very high volume of data, very slow hard drives, or other unusual requirements, open the file before your while loop, write to it every time you acquire a sample (leaving the buffering to the OS), and close it afterwards.
If you decide you need to manage buffering on your own, you can use queues. See this example: https://decibel.ni.com/content/docs/DOC-14804 for reference (they stream data from disk, buffering it in the queue, but it is the same idea)
My VI consists of a flat sequence one pane of which
Replace the flat sequence with a finite state machine (e.g. http://forums.ni.com/t5/LabVIEW/Ending-a-Flat-Sequence-Inside-a-case-structure/td-p/3170025).

How to periodically update a LabVIEW chart when collecting multi-channel data at a high rate

Looking for some help with a LabVIEW data collection program. I collect 2 ms of data at 8 kHz (which gives 16 data points) per channel; I am collecting data on 4 analog channels with a National Instruments data acquisition board. The DAQmx collection task gives a 1D array of 4 waveforms.
If I don't display the data, my computation time is about 2 ms, and it is OK if the processing loop lags a little behind the collection loop. Updating the chart on LabVIEW's front panel, however, introduces an unacceptable delay. We don't need to update the display very quickly; probably 5-10 Hz would be sufficient, but I don't know how to set this up.
My current Labview VI has three parallel loops
A timed-loop for data collection
A loop for analysis and processing
A low priority loop for caching data to disk as a TDMS file
Data is passed from the collection loop to the other loops using a queue. LabVIEW examples gave me some ideas, but I am stuck.
Any suggestions, references, ideas would be appreciated.
Thanks
Azim
Follow Up Question
eaolson suggests that I re-sample the data for display purposes. The data coming from the DAQmx read is a one-dimensional array of waveforms, so I would need to somehow build or concatenate the waveform data for each channel and then re-sample it before updating the front panel chart. I suppose the best approach would be to queue the data and, in a display loop, dequeue it, build the array, re-sample the data based on screen resolution, and then update the chart. Would there be any other approach? I will look on the NI LabVIEW Forum (http://forums.ni.com/ni/board?board.id=170) for more information, as suggested by eaolson.
Updates
changed acceptable update rate for graphs to 5-10Hz (thanks Underflow and eaolson)
disk cache loop is a low priority one (thanks eaolson)
Thanks for all the responses.
Your overall architecture description sounds solid, but... getting to 30 Hz for any non-trivial graph is going to be challenging. Make sure you really need that rate before trying to make it happen. Optimizing to that level might take some time.
References that should be helpful:
You can defer panel updates. This keeps the front panel from refreshing until you're ready for it to do so, allowing you to buffer data in the background, and only draw it occasionally.
You should know about (a)synchronous display. This option allows some control over display rates.
There is some general advice available about speeding execution.
There is a (somewhat dated) report on execution speed on the LAVA forums. Googling around the LAVA forums is a great idea if you need to optimize your speed.
Television updates at about 30 Hz. Any more than that is faster than the human eye can see. 30 Hz should be the maximum update rate you should consider for a display, not the starting point. Consider an update rate of 5-10 Hz.
LabVIEW charts append the most recent data to the historical data they store and display all the data at once. At 8 kHz, you're acquiring at least 8000 data points per channel per second. That means the array backing that graph has to continuously be resized to hold the new data. Also, even if your graph is 1000 pixels across, that means you're displaying 8 data points per screen pixel. There's not usually any reason to display any more than one data point per pixel. If you really need fast update rates, plot less data. Create an array to hold the historical data and plot only every Nth data point, where N is chosen so you're plotting, say, only a few hundred points.
Remember that your loops can run at different rates. It may be satisfactory to run the write-to-disk loop at a much lower frequency than the data collection rate, maybe every couple of seconds.
Avoid property nodes if you can. They run in the UI thread, which is slower than most other execution.
Other than that, it's really hard to offer a lot of substantial advice without seeing code or more specifics. Consider also asking your question at the NI LabVIEW forums. There are a lot of helpful people there.