I am making a parkour map where you run over water to temporarily summon ice beneath your feet, but I don't know how to temporarily run a tick function.
Simple: when the player is about to run across the desired area, give them Frost Walker boots. Have them step on a pressure plate or open a door next to an observer linked to a command block that runs /gamerule randomTickSpeed <desired tick speed> (remove the <>). I don't know what tick speed you want, but I do warn you that high values might cause lag. Do some experimenting of your own by running the command above while looking at some wheat: the higher the tick speed, the faster it grows. After the player has run across the area, the ice will be gone almost instantly.
I am working on an LLC converter project, so I need PWM signals with variable frequency. I mean I need to change the frequency in real time, for example frequency modulation from 40 kHz to 80 kHz. Can anyone give me an idea? Which timer mode do I have to use? Thanks.
It's a little tricky to answer your question when you don't state the exact hardware you're working with. Seeing your tags, I will assume it's a member of the STM32 family.
STM32 standard timers have registers you usually don't need to interface with directly; HAL does that for you. However, as far as I am aware, HAL does not support this kind of functionality. A standard STM32 timer has a TIMx_ARR and a TIMx_CCRn register, which hold some of the configuration necessary for PWM generation. You should be able to change your frequency by adjusting the ARR register and the duty cycle by adjusting the CCRn register.
Be careful with that approach, though, as it usually has no built-in protection. You will not damage your device, but it is very easy to produce unintended behavior.
You also need to consider the prescaler values and the general configuration of your timer.
For detailed information, refer to the GPTIM chapter in the reference manual of your device, as I cannot give you a more detailed description with the little information you have provided.
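To make that concrete, here is a minimal sketch of adjusting those registers on the fly. The timer instance, channel, device header, timer clock and the fixed ~50% duty cycle are assumptions on my part, not something from your post:

```c
/* Minimal sketch: retune an STM32 timer's PWM frequency and duty cycle on the fly.
 * Assumptions (not from the original question): TIM3 channel 1, CMSIS device
 * header "stm32f1xx.h" (pick the one for your part), timer clock of 72 MHz with
 * prescaler 0, and a fixed ~50% duty cycle. */
#include "stm32f1xx.h"            /* assumption: adjust to your device family */

#define TIMER_CLK_HZ 72000000UL   /* assumption: timer kernel clock */

static void pwm_set_frequency(uint32_t freq_hz)
{
    /* Period in timer ticks; PSC assumed to be 0 (prescaler = 1). */
    uint32_t period = (TIMER_CLK_HZ / freq_hz) - 1U;

    TIM3->ARR  = period;             /* new period -> new PWM frequency */
    TIM3->CCR1 = (period + 1U) / 2U; /* keep roughly 50% duty cycle     */

    /* With ARR preload (TIM_CR1_ARPE) enabled, the new values take effect
     * at the next update event, avoiding a glitch mid-period. */
}

/* Example: sweep an LLC-converter-style range, e.g. 40 kHz ... 80 kHz. */
void example(void)
{
    pwm_set_frequency(40000U);
    /* ... later, from a control loop ... */
    pwm_set_frequency(80000U);
}
```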
As far as I understood from your question and follow-up comments, you want a constant duty cycle (~50%), but you want variable frequency as well as a phase shift. That is totally doable, and you can change the values on the fly, but for the phase shift I would suggest using two timers: one master, one slave.
Idea:
The master timer controls the phase shift. The period of the master is equal to the period of the final waveform. It counts from 0 to its ARR, and somewhere in between is the phase-shift value in the compare register, which flips the master's output from LOW to HIGH on its way from 0 to ARR.
The slave is activated by the master's output changing from LOW to HIGH and runs for one period, which is equal to the period of the master (ARR). It outputs PWM on some pin. Once it reaches ARR, it stops (only for the master to start it again). Obviously, you need to adjust the slave's compare register to keep the PWM duty cycle constant.
I made some crude illustration of what I mean, because discussing timers with text only can be a little (very) tricky. Paint skills 10/10:
How to adjust stuff:
Adjust the frequency (period length) by changing the ARR of both timers (it is always the same for both). If you want to keep the duty cycle, you will need to immediately adjust the slave timer's compare value to ARR/2 (for a ~50% duty cycle). If you reduce ARR, make sure the compare value of the phase-shifting master stays below the new ARR, otherwise the slave will never get triggered.
Adjust the phase shift by changing the compare value of the master timer between 0 and ARR (see the sketch below).
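As a rough register-level sketch of those two adjustments (the timer instances, channel, device header and preload setup are my own assumptions, not something from the question or the illustration):

```c
/* Sketch only: change period and phase shift on the fly, per the scheme above.
 * Assumptions: TIM2 = master (phase shifter), TIM3 = slave (PWM output on
 * channel 1), both already configured as in the notes below (master TRGO on
 * compare, slave in one-pulse PWM mode, ARR/CCR preload enabled on both). */
#include "stm32f1xx.h"   /* adjust to your device family */

static void set_period_ticks(uint32_t period_ticks)
{
    TIM2->ARR  = period_ticks - 1U;   /* master: one full output period */
    TIM3->ARR  = period_ticks - 1U;   /* slave: same period             */
    TIM3->CCR1 = period_ticks / 2U;   /* slave: keep ~50% duty cycle    */

    /* If the master's compare (phase shift) is now past the new ARR, clamp it,
     * otherwise the slave would never be triggered again. */
    if (TIM2->CCR1 >= TIM2->ARR)
        TIM2->CCR1 = TIM2->ARR - 1U;
}

static void set_phase_shift_ticks(uint32_t shift_ticks)
{
    /* Must stay between 0 and ARR; with preload enabled the change is taken
     * at the end of the current cycle, which avoids jitter. */
    if (shift_ticks < TIM2->ARR)
        TIM2->CCR1 = shift_ticks;
}
```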
Additional notes:
The master timer is configured to generate TRGO (trigger output, a master feature) when its compare output switches from LOW to HIGH.
The slave timer is in one-pulse mode (OPM), meaning it disables itself after a single period. It will be reactivated by the master's next phase-shift pulse (compare HIGH).
The master's signal is supposed to reset and activate the slave timer (it resets CNT); there is a list of slave modes describing what the TRGI trigger input does to the slave timer. Resetting the timer will load the new values into ARR (see next point).
Both master and slave have ARR buffer enabled. This will allow you to change ARR values, but the changes take effect only when the current cycle ends. This will prevent jitter while changing period length and/or phase shift.
The slave timer is in PWM1 or PWM2 mode, depending on whether you want the first part of the output waveform to be LOW or HIGH; that's all the difference.
Helpful example from me:
I have written an implementation of master/slave timers, with them activating each other in different ways, purely at the register level and with every line of code commented. I was a little new to it all (which shows in the structure of the project); it was literally my first experiment with timers after studying them in the reference manual for days, but I tried my best. I have a description of what I do in main.c. You may find it helpful. Note that timers with the same numbers are similar or even identical across various STM32 devices, so my code is likely portable down to copy-paste into your code (which I'm totally OK with if you or anyone does that). Here is a link to main.c on my GitHub. I also have oscilloscope screenshots there.
I have a question: is there a way to keep the Pepper robot continuously in one pose? For example, to always have its arms spread.
With NAOqi 2.9, you can make an animation that lasts very long with the desired pose at the start and at the end.
With NAOqi 2.5, you can make an animation with a single frame that loops (with a goToFrame back to it at the end), but it will not prevent other commands from interfering with it.
Note that leaving the arms spread wide makes the shoulders warm, so the robot may have difficulty keeping the pose. Try to set the motor positions cleverly to reduce the effort.
You can either turn off autonomous mode, or, if you want to keep it on, you can use the motor stiffness box in Choregraphe, but it will make the robot warm up.
You can stop autonomous movement, or try making a timeline with an FPS of 0.
So my project is an online RTS (real-time strategy) game (VB.NET) where, after a matchmaking server has matched 2 players, one player is assigned as host and the other as client in a socket communication (TCP).
My design is that server and client only record and send information about input, and the game takes care of all the animations/movements according to the information from the inputs.
This works well, except when it comes to having both the server's and the client's games run exactly at the same speed. It seems like the operations in the games are handled at different speeds, regardless of whether I have both client and server on the same computer or use different computers as server/client. This is obviously crucial for the game to work.
Since the game at times has 300-500 units, I thought it would be too much to send an entire game state from server/client 10 times a second.
So my questions are:
How to synchronize the games while sending only inputs? (if possible)
What other designs are doable in this case, and how do they work?
(In VB.NET I use timers for operations, such that every 100 ms (the timer interval) a character moves and changes animation, and stuff like that.)
Thanks in advance for any help on this issue, my project really depends on it!
Timers are not guaranteed to tick at exactly the set rate. The speed at which they tick can be affected by how busy the system, and in particular your process, is. However, while you cannot get a timer to tick at an accurate pace, you can calculate exactly how long it has been since some particular point in time, such as the point at which you started the timer. The simplest way to measure time like that is to use the Stopwatch class.
When you do it this way, you don't have a smooth order of events where, for instance, something moves one pixel per timer tick. Instead, you have a path where you know something is moving from one place to another over a specified span of time, and then, given the current exact time according to the stopwatch, you can determine the current position of that object along that path. Therefore, on a system which is running faster the events appear smoother, while on a system which is running slower the events occur in a more jumpy fashion, but both systems agree on where everything is at any given moment.
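To illustrate the idea in code: the answer above refers to .NET's Stopwatch, but as a language-neutral sketch, here is the same technique in C with a monotonic clock standing in for Stopwatch; the path endpoints and duration are made-up example values.

```c
/* Illustration of time-based interpolation: position depends on elapsed time,
 * not on how many timer ticks have fired. clock_gettime(CLOCK_MONOTONIC) plays
 * the role of the Stopwatch; the path and duration are invented values. */
#include <stdio.h>
#include <time.h>

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    const double start_x = 0.0, end_x = 100.0;  /* unit walks from x=0 to x=100 */
    const double duration = 2.0;                /* over 2 seconds               */
    const double t0 = now_seconds();

    for (;;) {
        double t = (now_seconds() - t0) / duration;  /* fraction of path done */
        if (t > 1.0) t = 1.0;

        /* Fast and slow machines call this at different rates, but they agree
         * on the unit's position at any given wall-clock moment. */
        double x = start_x + (end_x - start_x) * t;
        printf("\rx = %6.2f", x);
        fflush(stdout);

        if (t >= 1.0) break;
    }
    printf("\n");
    return 0;
}
```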
I am wondering why post-layout simulations for digital designs take a long time?
Why can't software just figure out a chip's timing and model the behavior with a program that creates delays with sleep() or something? My guess is that sleep() isn't accurate enough to model hardware, but I'm not sure.
So, what is it actually doing that takes so long?
Thanks!
Post-layout simulations (in fact, anything post-synthesis) will be simulating gates rather than RTL, and there are a lot of gates.
I think you've got your understanding of how a simulator works a little confused. I say that because a call like sleep() is related to waiting for time as measured by the clock on the wall, not the simulator time counter. Simulator time advances however quickly the simulator runs.
A simulator is a loop that evaluates the system state. Each iteration of the loop is a 'time slice' e.g. what the state of the system is at time 100ns. It only advances from one time slice to the next when all the signals in it have reached a steady state.
In an RTL or untimed gate simulation, most evaluation of signals happens in 'zero time', which is to say that the effect of evaluating an assignment happens in the same time slice. The one exception tends to be the clock, which is defined to change at a certain time and it causes registers to fire, which causes them to change their output, which causes processes, modules, assignments which have inputs from registers to re-evaluate, which causes other signals to change, which causes other processes to re-evaluate, etc, etc, etc.... until everything has settled down, and we can move to the next clock edge.
In a post-layout simulation with back-annotated timing, every gate in the system has a time from input to output associated with it. This means nothing happens in 'zero time' any more. The simulator now has to put the effect of every assignment on a list saying 'signal b will change to 1 at time 102.35ns'. Every gate has different timing. Every input on every gate will have different timing to the output. This means that a back-annotated simulation has to evaluate lots and lots of time slices, as signals are changing state at lots of different times, not just when the clock changes. There probably isn't much happening in each slice, but there are lots of them.
...and I've only talked about adding gate timing. Add wire timing and things get even more complex.
Basically there's a whole lot more to worry about, and so the sims get slower.
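To make the "lots of time slices" point concrete, here is a toy C sketch of the event-queue idea described above. It is nothing like a real simulator's implementation, and the delay values are invented, but it shows how back-annotated delays turn one clock edge into many distinct evaluation times.

```c
/* Toy event-driven "simulator": events ordered by time, each pulled and
 * evaluated in its own time slice. With zero-delay RTL evaluation all of the
 * changes below would collapse into the single slice at 100 ns; with
 * back-annotated gate delays each gate gets its own slice. */
#include <stdio.h>

#define MAX_EVENTS 64

struct event { double time_ns; const char *signal; int value; };

static struct event queue[MAX_EVENTS];
static int n_events;

static void schedule(double time_ns, const char *signal, int value)
{
    queue[n_events++] = (struct event){ time_ns, signal, value };
}

int main(void)
{
    /* Clock edge at 100 ns, then a register and two gates, each with its own
     * (made-up) back-annotated delay. */
    schedule(100.00, "clk", 1);
    schedule(100.35, "q",   1);   /* register clk->Q delay  */
    schedule(101.45, "n1",  0);   /* first gate in the cone */
    schedule(102.30, "n2",  1);   /* second gate            */

    /* The simulation loop: take the earliest pending event, evaluate it.
     * A real simulator uses a sorted wheel and fan-out lists instead. */
    for (int done = 0; done < n_events; done++) {
        int next = -1;
        for (int i = 0; i < n_events; i++)
            if (queue[i].signal && (next < 0 || queue[i].time_ns < queue[next].time_ns))
                next = i;
        printf("t = %7.2f ns : %s -> %d\n",
               queue[next].time_ns, queue[next].signal, queue[next].value);
        queue[next].signal = NULL;   /* mark as consumed */
    }
    return 0;
}
```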
I need to implement the following feature for my device running embedded Linux on a 200 MHz MIPS CPU:
1) if a reset button is pressed and held for less than a second - proceed with reboot
2) if a reset button is pressed and held for at least 3 sec. - restore the system's configuration with default values from NVRAM and then reboot.
I'm thinking of two ways:
1) a daemon that constantly polls the button's state with proper timings via GPIO ioctls
(likely too much overhead, lots of context switching?)
2) simple char driver polling the button, measuring timings and reporting the state, for example, via /proc to user space where daemon or a shell script can check and do what's required.
And for both cases I have no idea how to measure the time :(
What would you suggest/recommend?
You have to implement those in hardware. The purpose of the "restore defaults from NVRAM" is to restore a so-called "bricked" device.
For example, what if an NVRAM setting is modified (cosmic ray?) such that the device cannot boot? In that case, your proposed button-polling daemon will never execute.
For the one-second held reboot, use an RC (resistor + capacitor) circuit to "debounce" the button press. Select an RC time constant which is appropriate for the one second delay. Use a comparator watching the RC voltage to signal the RESET pin on the MIPS cpu.
For the three-second press functionality (restore NVRAM defaults), you have to do something more complicated, probably.
One possibility is to put a tiny PIC microcontroller into the reset circuit, but only use a microcontroller with fuse (non-erasable) ROM, not NVRAM.
An easier possibility is to have a ROM containing defaults on the same circuit and bus as the NVRAM. A J/K flip-flop latch can become part of your reset circuitry. You'll also need a three-second-tuned RC circuit and comparator. On sub-three-second presses, the flip-flop should latch a 0 output and on three-second-plus presses, the 2nd RC circuit should trigger the comparator after 3 seconds and present a 1 to the J/K latch, which will toggle its output.
The flip-flop output Q will store the single bit telling your circuit whether this reset cycle was subsequent to a three-second push. If so, that output Q is driving the chip select to the NVRAM and Q* is driving the chip select to ROM. (I assume chip select is negative logic on both NVRAM and ROM chips.)
Then when your CPU boots, it will fetch the settings from either the NVRAM or the ROM, depending on the chip select line.
Your boot code can detect that it booted with ROM chip select, and can later reset the J/K flip-flop with a GPIO line. Then the CPU will be able to write good values back into the NVRAM. That unbricks the device, hopefully.
You want to use ROM that is not erasable or reusable. That kind of ROM is the most resistant to static electricity, power supply trouble, and radiation. Radiation is much more present than we generally realize, and the amount of cosmic ray flux is multiplied by taking a device onboard an airliner, for example.
I am not familiar with the MIPS processor and the GPIO/interrupt capabilities of the pin you could be using, but a possible methodology could be as follows.
Configure the input pin as an interrupt input.
When the interrupt fires, disable the interrupt and start a short 100 ms-ish timer.
When the timer triggers, check that the button is still pressed (for debounce). If it is not, re-enable the GPIO interrupt and restart; otherwise set the timer to re-trigger after the 3-second timeout.
When the timer triggers this time, do your reboot if the button is no longer pressed; otherwise restore the system configuration and reboot. (A rough code sketch of this sequence follows after this answer.)
If the pin cannot provide an interrupt then step 1 will be a polling task to look at the input.
The time between the reset button being pressed and the full reset process being run will always be 3 seconds from a debounced button press. In a reset situation this may not be important, particularly if, as part of step 3, you make it apparent to the user that a reset sequence has started - blank the display, for example.
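If you end up doing this in a driver, the sequence above might look roughly like the kernel-module sketch below. The GPIO number, the active-low assumption, and the printk placeholders standing in for the actual restore/reboot actions are all my own assumptions, not something from the question, and error handling is omitted.

```c
/* Rough, untested sketch of the interrupt + timer sequence described above. */
#include <linux/module.h>
#include <linux/gpio.h>
#include <linux/interrupt.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

#define RESET_GPIO    42      /* placeholder pin number */
#define DEBOUNCE_MS   100
#define LONG_PRESS_MS 3000

static struct timer_list btn_timer;
static int irq;
static bool debounced;        /* false: waiting 100 ms, true: waiting for 3 s mark */

static void btn_timer_fn(struct timer_list *t)
{
    int pressed = !gpio_get_value(RESET_GPIO);   /* assume active-low button */

    if (!debounced) {
        if (!pressed) {                          /* bounce/very short press: rearm */
            enable_irq(irq);
            return;
        }
        debounced = true;                        /* still held: wait out the 3 s */
        mod_timer(&btn_timer,
                  jiffies + msecs_to_jiffies(LONG_PRESS_MS - DEBOUNCE_MS));
        return;
    }

    if (pressed)
        pr_info("reset-btn: long press, restore defaults then reboot\n");
    else
        pr_info("reset-btn: short press, reboot\n");
    /* hand off to user space (sysfs/uevent) or call the restore/reboot path here */
}

static irqreturn_t btn_irq(int irqno, void *dev)
{
    disable_irq_nosync(irqno);                   /* step 2: ignore further bounces */
    debounced = false;
    mod_timer(&btn_timer, jiffies + msecs_to_jiffies(DEBOUNCE_MS));
    return IRQ_HANDLED;
}

static int __init btn_init(void)
{
    gpio_request(RESET_GPIO, "reset-btn");
    gpio_direction_input(RESET_GPIO);
    irq = gpio_to_irq(RESET_GPIO);
    timer_setup(&btn_timer, btn_timer_fn, 0);
    return request_irq(irq, btn_irq, IRQF_TRIGGER_FALLING, "reset-btn", NULL);
}

static void __exit btn_exit(void)
{
    free_irq(irq, NULL);
    del_timer_sync(&btn_timer);
    gpio_free(RESET_GPIO);
}

module_init(btn_init);
module_exit(btn_exit);
MODULE_LICENSE("GPL");
```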
If you want to do this in software, you need to put this in kernel (interrupt) code, rather than in a shell script or daemon. The better approach would be to put this in hardware.
In my experience, the likely reason for resetting the device will be bad user code which has locked or bricked the processor. If the issue is a corruption of memory due to RF energy or something of that nature, you may require hardware or an external (hardened) processor to reflash the device and fix the problem.
In the bad-user-code case, processor interrupts and kernel code should continue to run, while user code may be completely stalled. If you can poll the pin from an interrupt, you stand a much better chance of actually getting the reset you expect. Also, this enables you to do event-driven programming, rather than constantly polling the pin.
One other approach (not to the specs you listed, but a popular method for achieving the same goal) would be to have the startup routines check a GPIO line, and hold a button down when you want to re-initialize the device. On most embedded Linux devices which I've seen, the "Reset" button is wired to a dedicated reset pin on the microcontroller, and not to a GPIO pin. You may have to go with this route, unless you want to start cutting traces.