Hello World, Bye Cruel World - nxc

I am BRAND new at coding, and don't understand. I know this has probably been asked, but I didn't know how to search this specifically.
I am using NXC with the Bricx Command Center (BricxCC).
How can I display "Hello World", play a sound, then display "Bye Cruel world" beneath the previous line?
I am actually supposed to use a string message, but I don't know what that is.
This is what I have, but "Bye Cruel World" is not displaying:
task main()
{
    TextOut(10, LCD_LINE4, "Hello World");     // display text at x=10 on LCD line 4
    Wait(SEC_2);                               // wait 2000 ms, or 2 seconds
    PlaySound(SOUND_LOW_BEEP);
    Wait(SEC_2);                               // wait 2000 ms, or 2 seconds
    TextOut(10, LCD_LINE5, "Bye Cruel World"); // LCD_LINE5 is the line directly beneath LCD_LINE4
    Wait(SEC_2);                               // without a final wait the program exits and the screen clears immediately
}

Related

Issue with Kotlin lambdas halting code using Spoon plug-in by jaredsburrows

I'm attempting to run an espresso test on a simple pre-fabricated Android app using the Spoon plugin to take screenshots and test on multiple devices in Kotlin. I looked at the official Github example but can't seem to get it working. I'm thinking it might be an issue with my understanding of lambda functions in Kotlin, as I've never used them before.
Currently, my code is freezing after the first command in the lambda function.
@Test
fun firstFragmentTest() {
    scenario.scenario.onActivity { activity ->
        Log.d("First Fragment Test: ", "Testing to make sure first fragment is accurate")
        // checking to see if center text is accurate
        onView(withId(R.id.textview_first)).check(matches(withText("Hello first fragment")))
        Log.d("First Fragment Test: ", "Center text is accurate")
        Spoon.screenshot(activity, "FirstFragment", "ButtonTests", "firstFragmentTest")
        // checking to see if button text is accurate
        onView(withId(R.id.button_first)).check(matches(withText("Next")))
        Log.d("First Fragment Test: ", "Button text is accurate")
        // checking to see if top text is accurate
        // onView(withId(R.id.FirstFragment)).check(matches(withText("First Fragment")))
        Log.d("First Fragment Test: ", "First fragment text is accurate")
    }
}
This is my logcat:
2022-05-31 13:43:39.640 2155-2190/com.example.problem2_21 D/Previous Button Test:: Testing functionality of previous button
2022-05-31 13:43:39.643 2155-2155/com.example.problem2_21 I/ViewInteraction: Performing 'single click' action on view view.getId() is <2131230815/com.example.problem2_21:id/button_second>
2022-05-31 13:43:39.972 2155-2190/com.example.problem2_21 D/Previous Button Test:: Previous button pressed
2022-05-31 13:43:39.973 2155-2190/com.example.problem2_21 D/Previous Button Test:: First fragment screen should be displayed
2022-05-31 13:43:39.974 2155-2155/com.example.problem2_21 D/First Fragment Test:: Testing to make sure first fragment is accurate
As you can see, the test stops running after the first line of the lambda. I have multiple test functions that lead into this one, which aren't set up for Spoon yet and don't use lambda functions, and they work fine.
I actually figured it out, in case anyone else has the same issue. The hang is likely a deadlock: onActivity runs its block on the main thread, while Espresso's onView(...).check(...) blocks waiting for the main thread to go idle. I just had to place the lambda, containing only the screenshot, after the test code like this:
@Test
fun firstFragmentTest() {
    Log.d("First Fragment Test: ", "Testing to make sure first fragment is accurate")
    // checking to see if center text is accurate
    onView(withId(R.id.textview_first)).check(matches(withText("Hello first fragment")))
    Log.d("First Fragment Test: ", "Center text is accurate")
    // checking to see if button text is accurate
    onView(withId(R.id.button_first)).check(matches(withText("Next")))
    Log.d("First Fragment Test: ", "Button text is accurate")
    // checking to see if top text is accurate
    // onView(withId(R.id.FirstFragment)).check(matches(withText("First Fragment")))
    Log.d("First Fragment Test: ", "First fragment text is accurate")
    scenario.scenario.onActivity { activity ->
        Spoon.screenshot(activity, "FirstFragment", "ButtonTests", "firstFragmentTest")
    }
}

NAudio: How to accurately get the current play location when changing play position using AudioFileReader and WaveOutEvent

I'm creating an application that needs to allow the user to select the play position of an audio file while the file is playing. However, once the position is changed, I'm having trouble identifying the current position.
Here's an example program that shows how I'm going about it.
using NAudio.Utils;
using NAudio.Wave;
using System;

namespace NAudioTest
{
    class Program
    {
        static void Main()
        {
            var audioFileReader = new AudioFileReader("test.mp3");
            var waveOutEvent = new WaveOutEvent();
            waveOutEvent.Init(audioFileReader);
            waveOutEvent.Play();
            while (true)
            {
                var key = Console.ReadKey(true);
                if (key.Key == ConsoleKey.Enter)
                {
                    var playLocationSeconds =
                        waveOutEvent.GetPositionTimeSpan().TotalSeconds;
                    Console.WriteLine(
                        "Play location is " + playLocationSeconds + " seconds");
                }
                else if (key.Key == ConsoleKey.RightArrow)
                {
                    audioFileReader.CurrentTime =
                        audioFileReader.CurrentTime.Add(TimeSpan.FromSeconds(30));
                }
                else
                {
                    break;
                }
            }
        }
    }
}
Steps to reproduce the problem:
1. Start the program: the audio file starts playing.
2. Press the Enter key: the current play time is written to the console.
3. Press the right arrow key: the playback jumps ahead (presumably) to the expected location.
4. Press the Enter key again: a play time is written to the console, but it looks to be the amount of time since the audio first started playing, not the time of the current play position.
I have tried getting the value of AudioFileReader.CurrentTime instead of calling GetPositionTimeSpan on the WaveOutEvent. The problem with this approach is that the AudioFileReader.CurrentTime value proceeds in jumps, presumably because the underlying stream is buffered when used with WaveOutEvent, so CurrentTime does not accurately reflect the play position, only the position in the underlying stream.
How do I support arbitrary play positioning yet continue to get an accurate play position current time?
The CurrentTime property of your AudioFileReader is good enough to report the current playback position, especially if your latency is not very high. I found the difference between it and waveOutEvent.GetPositionTimeSpan() to be 100-200 ms at most.
You are already using the setter of the CurrentTime property to reposition within the stream, so it is consistent to use the getter to query the current position as well. If you are concerned about precision, you can use a lower latency.
The extension method GetPositionTimeSpan() does seem to return the total length of playback so far, not the position within the stream. Admittedly, I do not know why this is so.
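As a minimal sketch of the suggestion above (the 100 ms latency value and the file name are assumptions, not from the question), the position query would simply read the reader's CurrentTime:

```csharp
// Sketch: query AudioFileReader.CurrentTime for the play position.
// Unlike WaveOutEvent.GetPositionTimeSpan(), it stays consistent after
// seeking via the CurrentTime setter; a lower DesiredLatency tightens it.
var audioFileReader = new AudioFileReader("test.mp3");
var waveOutEvent = new WaveOutEvent { DesiredLatency = 100 }; // assumption: 100 ms latency
waveOutEvent.Init(audioFileReader);
waveOutEvent.Play();

// ...later, e.g. when the Enter key is pressed:
Console.WriteLine(
    "Play location is " + audioFileReader.CurrentTime.TotalSeconds + " seconds");
```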

Video streaming - dealing with header 206 (php to HTML5)

As usual, it never works for me...
I'm trying to stream video, and it's going very well with the code below:
function send_back_video($video_path)
{
    ob_clean();
    $mime = "application/octet-stream"; // The MIME type of the file; replace with the real video type (e.g. video/mp4)
    $size = filesize($video_path);      // The size of the file in bytes

    // Send the content type header
    header('Content-Type: ' . $mime);

    // Check if it's an HTTP range request
    if (isset($_SERVER['HTTP_RANGE'])) {
        // Parse the range header to get the byte offsets
        $ranges = array_map(
            'intval',                             // parse the parts into integers
            explode(
                '-',                              // the range separator
                substr($_SERVER['HTTP_RANGE'], 6) // skip the `bytes=` part of the header
            )
        );

        // If the last range param is empty, it means "until EOF (End of File)"
        if (!$ranges[1]) {
            $ranges[1] = $size - 1;
        }

        // Send the appropriate headers
        header('HTTP/1.1 206 Partial Content');
        header('Accept-Ranges: bytes');
        // The range is inclusive, so the length is end - start + 1
        header('Content-Length: ' . ($ranges[1] - $ranges[0] + 1));
        // Send the range we are serving
        header(
            sprintf(
                'Content-Range: bytes %d-%d/%d', // the header format
                $ranges[0],                      // the start of the range
                $ranges[1],                      // the end of the range
                $size                            // total size of the file
            )
        );

        // It's time to output the file
        $f = fopen($video_path, 'rb'); // open the file in binary mode
        $chunkSize = 8192;             // the size of each chunk to output

        // Seek to the requested start of the range
        fseek($f, $ranges[0]);

        // Output the data until the (inclusive) end of the range
        while (ftell($f) <= $ranges[1]) {
            // Don't read past the end of the requested range
            $bytesLeft = $ranges[1] - ftell($f) + 1;
            echo fread($f, min($chunkSize, $bytesLeft));
            // Flush the buffer immediately
            flush();
        }
        fclose($f);
    } else {
        // It's not a range request; output the whole file
        header('Content-Length: ' . $size);
        readfile($video_path);
        // and flush the buffer
        flush();
    }
}
"So if it works, what do you want?" - you might ask..
As I stated before, the code does a very good job, BUT there is one issue.
If I start to play a video, PHP sends header('HTTP/1.1 206 Partial Content') and this video request stays open until EOF, which means I am FORCED to download the entire video even if I don't want to (say, when I click on the next video).
My JS removes the video tag (which should ABORT the request) and creates a new video tag with the new link. The ABORT part is not happening.
Here is an example of the GET requests:
1st - initial video get
2nd - play button pressed
3rd - navigate away - new DOM item created and waiting to appear.
So what I'm looking for is some option to close the connection when the user navigates away.
I saw that YouTube does many GETs; as I understand it, every GET brings a new "chunk" when the previous "chunk" is close to its end. That is a good option, but it cancels buffering, and I want to be able to buffer the video.
P.S.
For sure I'm doing something wrong, I just can't see what.
P.P.S.
I'm not the author of the code.
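For what it's worth, one common pattern for stopping output on an aborted request (not something the original post tried, and untested here) is to check connection_aborted() inside the chunk loop, so PHP stops reading the file as soon as the browser drops the connection:

```php
// Sketch: inside the chunk-output loop, bail out once the client is gone.
// connection_aborted() only updates after an attempted write/flush.
while (ftell($f) <= $ranges[1]) {
    echo fread($f, $chunkSize);
    flush();
    if (connection_aborted()) { // browser closed or navigated away
        break;
    }
}
fclose($f);
```

Whether this helps depends on the browser actually closing the connection rather than keeping it alive, which seems to be the behavior in question here.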

template formatting - allow readable format

I'm creating some Apache Velocity templates, and reformatting the code (IntelliJ IDEA) breaks the template, because the text parts of the template are automatically indented.
When there's only a single line of output text, the indent is not preserved in the output:
#macro(body)
    #if(${someCondition})
        Hello
    #end
#end
The output is:
Hello
The problem with multiline text output is this:
#macro(body)
    #if(${someCondition})
        Hello
        World
    #end
#end
And the output:
Hello
        World
Is there some way to strip the indent of the text part based on the surrounding code? I'd like to avoid ugly formatted templates like this:
#macro(body)
#if(${someCondition})
Hello
World
#end
#end
The code which uses the template:
final VelocityEngine engine = new VelocityEngine();
engine.init();
final VelocityContext context = new VelocityContext();
//put variables into context
engine.evaluate(context, outputWriter, "LOG", inputString);
Maybe there's something I can put into the VelocityContext, but I wasn't able to find it.
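One avenue worth checking (an assumption on my part; the post doesn't say which Velocity version is in use): Velocity 2.x has a "space gobbling" engine setting whose structured mode is designed to let you indent directives and text without the indentation leaking into the output. As a configuration sketch:

```
# Velocity 2.x engine property (a guess at the fix; not available in 1.x).
# 'structured' gobbling strips indentation that follows the block structure
# of surrounding #if/#foreach/#macro directives.
parser.space_gobbling = structured
```

The same value can be set programmatically via engine.setProperty(...) before engine.init().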

Web audio API polyphony - using 2 different gain nodes not working?

I can't seem to create two oscillators with independent gain envelopes.
The code below creates two buttons which each play a sine tone at a different pitch. When I click on the first button, I hear the tone grow in volume as it should. But, when I click the second button, the tone reacts as if it is connected to the gain of the first tone. For example, if I click the second button (turning on the second tone) while the first tone is at volume 1, the second tone will enter at volume 1, even though it is supposed to envelope from 0 to 1 to 0 over the course of 10 seconds.
Can I only have one gain node per audio context? Or is there some other reason that the gains of these oscillators are being connected? In addition, after I play the tones once, I cannot play them again, which makes me especially think that I am doing something wrong. : )
Thanks. A link is below and the code is below that. This is my first post here, so let me know if you need anything else. This code must be run in a version of Chrome or Safari that supports the Web Audio API.
http://whitechord.org/just_mod/poly_test.html
WAAPI tests
<button onclick="play()">play one</button>
<button onclick="play2()">play two</button>
<script>
var context;
window.addEventListener('load', initAudio, false);

function initAudio() {
    try {
        context = new webkitAudioContext();
    } catch (e) {
        onError(e);
    }
}

function play() {
    var oscillator = context.createOscillator();
    var gainNode = context.createGainNode();
    gainNode.gain.value = 0.0;
    oscillator.connect(gainNode);
    gainNode.connect(context.destination);
    oscillator.frequency.value = 700;
    gainNode.gain.linearRampToValueAtTime(0.0, 0);  // envelope
    gainNode.gain.linearRampToValueAtTime(0.1, 5);  // envelope
    gainNode.gain.linearRampToValueAtTime(0.0, 10); // envelope
    oscillator.noteOn(0);
}

function play2() {
    var oscillator2 = context.createOscillator();
    var gainNode2 = context.createGainNode();
    gainNode2.gain.value = 0.0;
    oscillator2.connect(gainNode2);
    gainNode2.connect(context.destination);
    oscillator2.frequency.value = 400;
    gainNode2.gain.linearRampToValueAtTime(0.0, 0);  // envelope
    gainNode2.gain.linearRampToValueAtTime(0.1, 5);  // envelope
    gainNode2.gain.linearRampToValueAtTime(0.0, 10); // envelope
    oscillator2.noteOn(0);
}

/* error */
function onError(e) {
    alert(e);
}
</script>
</body>
</html>
You can have as many gain nodes as you want (this is how you could achieve mixing-bus-like setups, for example), so that's not the problem. Your problem is the following:
remember that the second parameter to linearRampToValueAtTime() is a time in the same coordinate system as context.currentTime.
And context.currentTime is always moving forward in real time, so all your ramps, curves, etc. should be calculated relative to it.
If you want something to happen 4 seconds from now, you'd pass context.currentTime + 4 to the Web Audio API function.
So change all the linearRampToValueAtTime() calls in your code to look like:
gainNode2.gain.linearRampToValueAtTime(0.0, context.currentTime); // envelope
gainNode2.gain.linearRampToValueAtTime(0.1, context.currentTime + 5); // envelope
gainNode2.gain.linearRampToValueAtTime(0.0, context.currentTime + 10); // envelope
And that should take care of your issues.
BTW you have a stray double quote in your BODY opening markup tag.
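To make the time-coordinate point concrete, here is a tiny sketch (plain JavaScript; the helper name is mine) that converts envelope offsets meaning "seconds from now" into absolute times on the AudioContext clock:

```javascript
// Convert envelope offsets ("seconds from now") into absolute times
// on the AudioContext clock, which is what the AudioParam schedulers expect.
function envelopeTimes(currentTime, offsets) {
    return offsets.map(function (dt) { return currentTime + dt; });
}

// If context.currentTime were 2.5 when a button is clicked, the
// 0/5/10-second envelope should be scheduled at these absolute times:
console.log(envelopeTimes(2.5, [0, 5, 10])); // [ 2.5, 7.5, 12.5 ]
```

Passing the raw values 0, 5, and 10 instead means "at 0, 5, and 10 seconds after the page's audio clock started", which is why a second click appears to inherit the first tone's envelope.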