How to change from one MusicSequence to another without a time delay - objective-c

I'm playing a MIDI sequence via MusicPlayer, loaded from a MIDI file, and I want to change to another sequence during playback.
When I try this:
MusicPlayerSetSequence(_player, sequence);
MusicSequenceSetAUGraph(sequence, _processingGraph);
it stops playback. So I start the player again and set the time with
MusicPlayerSetTime(_player, currentTime);
so it resumes where the previous sequence stopped, but there is a small delay.
I've tried adding to currentTime the interval between the time measured just before stopping and just after restarting, but there is still a delay.
I was wondering whether there is an alternative to stopping, changing the sequence, and starting again.

You definitely need to manage the AUSamplers if you are adding and removing tracks or switching sequences. It is probably cleaner to dispose of an AUSampler and create a new one for each new track, but it is also possible to 'recycle' AUSamplers - that just means you have to keep track of them.
Managing AUSamplers means that when you are no longer using an instance (for example, if you delete or replace a MusicTrack), you need to disconnect it from the AUMixer instance, remove it from the AUGraph instance, and then update the AUGraph.
There are lots of ways to handle all this. For convenience in keeping track of each AUSampler instance's bus number, loaded sound font, and some other state, I use an NSObject subclass named SamplerAudioUnit to contain all the needed properties and methods. Same for MusicTracks - I have a Track class - but this may not be needed in your project.
The gist, though, is that AUSamplers need to be managed for performance and memory. If an instance is no longer being used, it should be removed and its AUMixer input bus freed up.
BTW - I checked the docs and there is apparently no technical limit to the number of mixer busses - but the number does need to be specified.
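For example, a minimal sketch of specifying the bus count - assuming mixerUnit is the multichannel mixer's AudioUnit instance (e.g. obtained via AUGraphNodeInfo) and that this runs before the graph is initialized:
UInt32 busCount = 16; // enough busses for the maximum number of simultaneous AUSamplers
OSStatus result = AudioUnitSetProperty(mixerUnit,
                                       kAudioUnitProperty_ElementCount,
                                       kAudioUnitScope_Input,
                                       0,
                                       &busCount,
                                       sizeof(busCount));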
// this is not cut-and-paste code - just an example of managing the AUSampler instance
- (OSStatus)deleteTrack:(Track *)trackObj
{
    OSStatus result = noErr;
    // stop the MusicPlayer if it is currently playing
    BOOL MPstate = [self isPlaying];
    if (MPstate) {
        MusicPlayerStop(player);
    }
    // disconnect the node from the mixer and update the list of mixer busses
    SamplerAudioUnit *samplerObj = trackObj.sampler;
    UInt32 busNumber = samplerObj.busNumber;

    result = AUGraphDisconnectNodeInput(graph, mixerNode, busNumber);
    if (result) {[self printErrorMessage: @"AUGraphDisconnectNodeInput" withStatus: result];}

    [self clearMixerBusState: busNumber]; // routine that keeps track of available busses

    result = MusicSequenceDisposeTrack(sequence, trackObj.track);
    if (result) {[self printErrorMessage: @"MusicSequenceDisposeTrack" withStatus: result];}

    // remove the AUSampler node
    result = AUGraphRemoveNode(graph, samplerObj.samplerNode);
    if (result) {[self printErrorMessage: @"AUGraphRemoveNode" withStatus: result];}

    result = AUGraphUpdate(graph, NULL);
    if (result) {[self printErrorMessage: @"AUGraphUpdate" withStatus: result];}

    samplerObj = nil;
    trackObj = nil;

    if (MPstate) {
        MusicPlayerStart(player);
    }
    // CAShow(graph);
    // CAShow(sequence);
    return result;
}

Because
MusicPlayerSetSequence(_player, sequence);
MusicSequenceSetAUGraph(sequence, _processingGraph);
will still cause the player to stop, a small break can still be heard.
So instead of replacing the MusicSequence, I changed the content of its tracks instead, which doesn't cause any breaks:
MusicTrack currentTrack;
MusicTrack currentTrack2;
MusicSequenceGetIndTrack(musicSequence, 0, &currentTrack);
MusicSequenceGetIndTrack(musicSequence, 1, &currentTrack2);
MusicTrackClear(currentTrack, 0, _trackLen);
MusicTrackClear(currentTrack2, 0, _trackLen);

MusicSequence tmpSequence;
switch (number) {
    case 0:
        tmpSequence = musicSequence1;
        break;
    case 1:
        tmpSequence = musicSequence2;
        break;
    case 2:
        tmpSequence = musicSequence3;
        break;
    case 3:
        tmpSequence = musicSequence4;
        break;
    default:
        tmpSequence = musicSequence1;
        break;
}

MusicTrack tmpTrack;
MusicTrack tmpTrack2;
MusicSequenceGetIndTrack(tmpSequence, 0, &tmpTrack);
MusicSequenceGetIndTrack(tmpSequence, 1, &tmpTrack2);

MusicTimeStamp trackLen = 0;
UInt32 trackLenSize = sizeof(trackLen);
MusicTrackGetProperty(tmpTrack, kSequenceTrackProperty_TrackLength, &trackLen, &trackLenSize);
_trackLen = trackLen;

MusicTrackCopyInsert(tmpTrack, 0, _trackLen, currentTrack, 0);
MusicTrackCopyInsert(tmpTrack2, 0, _trackLen, currentTrack2, 0);
No disconnection of nodes, no updating the graph, no stopping the player.

Related

NAudio: Outputting an audio stream at high sample rate results in dropped and choppy sound

I am reading samples from a software-defined radio device that delivers two channels of Int16 (shorts) representing audio. I am trying to send this stream to an audio output device on a PC.
I already have this working with a different set of tools, but I would like to use NAudio in its place since it has other capabilities I could use.
The code starts with the software-defined radio method placing an array of shorts (i.e. Int16), arranged as left channel then right channel, onto a queue. The array size is 65536 and the sample rate is 192,000.
When a packet is received, it is placed on a BlockingCollection to be picked up by the thread that sends out the audio.
So, for the purposes of this question, the thread starts by reading the blocking collection, which returns 65536 shorts.
startStream = true;
waveformat = new WaveFormat(192000, 16, 2);
bs = new BufferedWaveProvider(waveformat);
bs.BufferLength = 65536 * 4;
_waveOut.DeviceNumber = 4;

while (true)
{
    dspPacket = dsp_Queue.Take(); // take an int16[] off the queue
    int j = 0;
    try
    {
        for (int i = 0; i < 32768; i += 2)
        {
            byte[] qDataByte = BitConverter.GetBytes(dspPacket[i]);
            dspBytes[j] = qDataByte[0];
            dspBytes[j + 1] = qDataByte[1];
            j += 2;
        }
        if (startStream)
        {
            bs.AddSamples(dspBytes, 0, 65536);
            _waveOut.Init(bs);
            _waveOut.Play();
            startStream = false;
        }
        else
        {
            bs.AddSamples(dspBytes, 0, 65536);
        }
    }
    catch (Exception ex)
    {
    }
    // SendData(dspPacket); // if this is uncommented, everything works correctly with the old method.
}
Now I can feed the output of the waveout device into a virtual audio cable, which then feeds it into some spectrum analyser software (screenshot omitted); the displayed signal is broken up.
When I use the old routines, I get a solid line and the sound does not break up.
So, am I doing something wrong here?
Thanks, Tom

No r/w bit made available to firmware by I2C peripheral of STM32F40x chips

I was wondering if anyone has found a way to determine the intention of a master communicating with an STM32F40x chip. From the perspective of the firmware on the STM32F40x, the address sent by the master is not available, and the R/W bit (bit 0 of the address byte) contained therein is also not available. So how can I prevent collisions? Has anyone else dealt with this? If so, what techniques did you use? My tentative solution is below for reference. I delay any writes to the DR data register until the TXE interrupt occurs. I thought at first this would be too late and a byte of garbage would be clocked out, but it seems to be working.
static inline void LLEVInterrupt(uint16_t irqSrc)
{
uint8_t i;
volatile uint16_t status;
I2CCBStruct* buffers;
I2C_TypeDef* addrBase;
// see which IRQ occurred, process accordingly...
switch (irqSrc)
{
case I2C_BUS_CHAN_1:
addrBase = this.addrBase1;
buffers = &this.buffsBus1;
break;
case I2C_BUS_CHAN_2:
addrBase = this.addrBase2;
buffers = &this.buffsBus2;
break;
case I2C_BUS_CHAN_3:
addrBase = this.addrBase3;
buffers = &this.buffsBus3;
break;
default:
while(1);
}
// ...START condition & address match detected
if (I2C_GetITStatus(addrBase, I2C_IT_ADDR) == SET)
{
// I2C_IT_ADDR: Cleared by software reading SR1 register followed reading SR2, or by hardware
// when PE=0.
// Note: Reading I2C_SR2 after reading I2C_SR1 clears the ADDR flag, even if the ADDR flag was
// set after reading I2C_SR1. Consequently, I2C_SR2 must be read only when ADDR is found
// set in I2C_SR1 or when the STOPF bit is cleared.
status = addrBase->SR1;
status = addrBase->SR2;
// Reset the index and receive count
buffers->txIndex = 0;
buffers->rxCount = 0;
// setup to ACK any Rx'd bytes
I2C_AcknowledgeConfig(addrBase, ENABLE);
return;
}
// Slave receiver mode
if (I2C_GetITStatus(addrBase, I2C_IT_RXNE) == SET)
{
// I2C_IT_RXNE: Cleared by software reading or writing the DR register
// or by hardware when PE=0.
// copy the received byte to the Rx buffer
buffers->rxBuf[buffers->rxCount] = (uint8_t)I2C_ReadRegister(addrBase, I2C_Register_DR);
if (RX_BUFFER_SIZE > buffers->rxCount)
{
buffers->rxCount++;
}
return;
}
// Slave transmitter mode
if (I2C_GetITStatus(addrBase, I2C_IT_TXE) == SET)
{
// I2C_IT_TXE: Cleared by software writing to the DR register or
// by hardware after a start or a stop condition or when PE=0.
// send any remaining bytes
I2C_SendData(addrBase, buffers->txBuf[buffers->txIndex]);
if (buffers->txIndex < buffers->txCount)
{
buffers->txIndex++;
}
return;
}
// ...STOP condition detected
if (I2C_GetITStatus(addrBase, I2C_IT_STOPF) == SET)
{
// STOPF (STOP detection) is cleared by software sequence: a read operation
// to I2C_SR1 register (I2C_GetITStatus()) followed by a write operation to
// I2C_CR1 register (I2C_Cmd() to re-enable the I2C peripheral).
// From the reference manual RM0368:
// Figure 163. Transfer sequence diagram for slave receiver
// if (STOPF == 1) {READ SR1; WRITE CR1}
// clear the IRQ status
status = addrBase->SR1;
// Write to CR1
I2C_Cmd(addrBase, ENABLE);
// a read cycle occurred? reset the transmit state
if (buffers->txCount > 0)
{
buffers->txCount = 0;
buffers->txIndex = 0;
}
// write cycle begun?
if (buffers->rxCount > 0)
{
// pass the I2C data to the enabled protocol handler
for (i = 0; i < buffers->rxCount; i++)
{
#if (COMM_PROTOCOL == COMM_PROTOCOL_DEBUG)
status = ProtProcRxData(buffers->rxBuf[i]);
#elif (COMM_PROTOCOL == COMM_PROTOCOL_PTEK)
status = PTEKProcRxData(buffers->rxBuf[i]);
#else
#error ** Invalid Host Protocol Selected **
#endif
if (status != ST_OK)
{
LogErr(ST_COMM_FAIL, __LINE__);
}
}
buffers->rxCount = 0;
}
return;
}
if (I2C_GetITStatus(addrBase, I2C_IT_AF) == SET)
{
// The NAck received from the host on the last byte of a transmit
// is shown as an acknowledge failure and must be cleared by
// writing 0 to the AF bit in SR1.
// This is not a real error but just how the i2c slave transmission process works.
// The hardware has no way to know how many bytes are to be transmitted, so the
// NAck is assumed to be a failed byte transmission.
// EV3-2: AF=1; AF is cleared by writing ‘0’ in AF bit of SR1 register.
I2C_ClearITPendingBit(addrBase, I2C_IT_AF);
return;
}
if (I2C_GetITStatus(addrBase, I2C_IT_BERR) == SET)
{
// There are extremely infrequent bus errors when testing with I2C Stick.
// Safer to have this check and clear than to risk an
// infinite loop of interrupts
// Set by hardware when the interface detects an SDA rising or falling
// edge while SCL is high, occurring in a non-valid position during a
// byte transfer.
// Cleared by software writing 0, or by hardware when PE=0.
I2C_ClearITPendingBit(addrBase, I2C_IT_BERR);
LogErr(ST_COMM_FAIL, __LINE__);
return;
}
if (I2C_GetITStatus(addrBase, I2C_IT_OVR) == SET)
{
// Check for other errors conditions that must be cleared.
I2C_ClearITPendingBit(addrBase, I2C_IT_OVR);
LogErr(ST_COMM_FAIL, __LINE__);
return;
}
if (I2C_GetITStatus(addrBase, I2C_IT_TIMEOUT) == SET)
{
// Check for other errors conditions that must be cleared.
I2C_ClearITPendingBit(addrBase, I2C_IT_TIMEOUT);
LogErr(ST_COMM_FAIL, __LINE__);
return;
}
// a spurious IRQ occurred; log it
LogErr(ST_INV_STATE, __LINE__);
}
I'm not sure if I understand you. Maybe you should provide more information or an example of what you would like to do.
Maybe this helps:
My experience is that in many I2C implementations the R/W bit is sent together with the 7-bit address, so most of the time there is no separate function to set or clear the R/W bit.
That means the byte on the wire is the 7-bit address shifted left by one, with the R/W flag in bit 0: odd address bytes request a read from the slave, and even address bytes announce a write to it.
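As a minimal sketch of what that looks like on the master side (the helper name is just illustrative):
#include <stdint.h>

/* The byte actually clocked out by the master combines the 7-bit slave
   address with the R/W flag in the least significant bit:
   odd byte = read, even byte = write. */
static uint8_t i2c_address_byte(uint8_t addr7, int read)
{
    return (uint8_t)((addr7 << 1) | (read ? 1u : 0u));
}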
There seems to be no way to determine whether the transaction initiated by receipt of the address is a read or a write, even though the hardware knows whether the LSB was set or clear. The master's intention only becomes known once the RXNE or TXE interrupt/bit occurs.

A video from GPUImageMovie has a red frame at the beginning

Thanks for looking at this question.
I create a movie from GPUImageMovieComposition and GPUImageMovieWriter, and sometimes (5% ~ 10% of the time) the movie has red frames at the beginning.
Can anyone explain why this happens?
I use AVFileTypeMPEG4 as the file type in this sample, but AVFileTypeQuickTimeMovie behaves the same.
_movieFile = [[GPUImageMovieComposition alloc] initWithComposition:composition andVideoComposition:videoComposition andAudioMix:nil];
_movieFile.playAtActualSpeed = YES;
_movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:processedMovieURL
size:CGSizeMake(1280.0, 720.0)
fileType:AVFileTypeMPEG4
outputSettings:videoSetting];
_movieWriter.shouldPassthroughAudio = NO;
[_movieWriter setVideoInputReadyCallback:nil];
[_movieWriter setHasAudioTrack:YES audioSettings:audioSetting];
[_movieFile addTarget:_movieWriter];
_movieFile.audioEncodingTarget = _movieWriter;
[_movieFile enableSynchronizedEncodingUsingMovieWriter:_movieWriter];
[_movieWriter startRecording];
[_movieFile startProcessing];
SOLUTION
Finally I found a way to solve this... though it is not a perfect solution...
I modified
- (void)processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime
in GPUImageMovie.m a little.
Every red frame had currentSampleTime.value == 0, so I simply avoid propagating a frame when its currentSampleTime.value == 0.
Here is the code I actually used:
for (id<GPUImageInput> currentTarget in targets)
{
    NSInteger indexOfObject = [targets indexOfObject:currentTarget];
    NSInteger targetTextureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
    if (currentSampleTime.value != 0) {
        [currentTarget newFrameReadyAtTime:currentSampleTime atIndex:targetTextureIndex];
    }
}
In my case there were not red but just blank frames at the beginning of video recorded by GPUImageMovieWriter.
The problem was that the audio samples appeared earlier than the video frames, so the assetWriter session was started before the first video frame became available.
I fixed that by modifying the processAudioBuffer function, replacing this code
if (CMTIME_IS_INVALID(startTime))
{
    runSynchronouslyOnContextQueue(_movieWriterContext, ^{
        if ((audioInputReadyCallback == NULL) && (assetWriter.status != AVAssetWriterStatusWriting))
        {
            [assetWriter startWriting];
        }
        [assetWriter startSessionAtSourceTime:currentSampleTime];
        startTime = currentSampleTime;
    });
}
with this
if (CMTIME_IS_INVALID(startTime))
{
    NSLog(@"0: Had to drop an audio frame: %@", CFBridgingRelease(CMTimeCopyDescription(kCFAllocatorDefault, currentSampleTime)));
    if (_shouldInvalidateAudioSampleWhenDone)
    {
        CMSampleBufferInvalidate(audioBuffer);
    }
    CFRelease(audioBuffer);
    return;
}
The fix is applicable to the latest versions of GPUImage from May 27 to Jul 01, 2014.
The only solution I have found so far that saved me from this issue is reverting to commit e98cc813b.
I figured it out by using git bisect and running a batch of processing tests on the same video.
This commit already had all the functionality required for my project, and only a few changes were needed to make it more stable. You can take a look at the changes here: https://github.com/crazyjooe/GPUImage.
Besides, after lots of testing, I can say that processing itself became much more stable, especially in terms of cancelling.
I do wonder how video processing became less reliable after all the changes that were introduced.

Using a Filter Audio Unit Effect in iOS5

I'm trying to use a RemoteIO connection and route the audio input through the built-in low-pass filter effect (iOS 5 only) and then back out of the hardware. I can make it route straight from the input to the output, but I can't get the filter to work. I'm not sure whether it's the filter audio unit or the routing that I've got wrong.
This bit is just my attempt at setting up the filter and changing the routing so that the data is processed by it.
Any help is appreciated.
// ******* BEGIN FILTER ********
NSLog(@"Begin filter");

// Create the audio component description - output filter
AudioComponentDescription filterCompDesc;
filterCompDesc.componentType = kAudioUnitType_Effect;
filterCompDesc.componentSubType = kAudioUnitSubType_LowPassFilter;
filterCompDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
filterCompDesc.componentFlags = 1;
filterCompDesc.componentFlagsMask = 1;

// Create the filter unit
AudioUnit lpFilterUnit;
AudioComponent filterComponent = AudioComponentFindNext(NULL, &filterCompDesc);
setupErr = AudioComponentInstanceNew(filterComponent, &lpFilterUnit);
NSAssert(setupErr == noErr, @"No instance of filter");

AudioUnitElement bus2 = 2;
setupErr = AudioUnitSetProperty(lpFilterUnit, kAudioUnitSubType_LowPassFilter, kAudioUnitScope_Output, bus2, &oneFlag, sizeof(oneFlag));
AudioUnitElement bus3 = 3;
setupErr = AudioUnitSetProperty(lpFilterUnit, kAudioUnitSubType_LowPassFilter, kAudioUnitScope_Input, bus3, &oneFlag, sizeof(oneFlag));
// ******** END FILTER ******** //

AudioUnitConnection hardInToLP;
hardInToLP.sourceAudioUnit = remoteIOunit;
hardInToLP.sourceOutputNumber = 1;
hardInToLP.destInputNumber = 3;
setupErr = AudioUnitSetProperty(
    remoteIOunit,                      // connection destination
    kAudioUnitProperty_MakeConnection, // property key
    kAudioUnitScope_Input,             // destination scope
    bus3,                              // destination element
    &hardInToLP,                       // connection definition
    sizeof(hardInToLP)
);

AudioUnitConnection LPToHardOut;
LPToHardOut.sourceAudioUnit = lpFilterUnit;
LPToHardOut.sourceOutputNumber = 1;
LPToHardOut.destInputNumber = 3;
setupErr = AudioUnitSetProperty(
    remoteIOunit,                      // connection destination
    kAudioUnitProperty_MakeConnection, // property key
    kAudioUnitScope_Input,             // destination scope
    bus3,                              // destination element
    &hardInToLP,                       // connection definition
    sizeof(hardInToLP)
);

/*
// Sets up the audio unit connection - new instance called connection
AudioUnitConnection connection;
// Connect audio input's out to audio out's in
connection.sourceAudioUnit = remoteIOunit;
connection.sourceOutputNumber = bus1;
connection.destInputNumber = bus0;
setupErr = AudioUnitSetProperty(remoteIOunit, kAudioUnitProperty_MakeConnection, kAudioUnitScope_Input, bus0, &connection, sizeof(connection));
*/
NSAssert(setupErr == noErr, @"No RIO connection");
A couple of things going on here:
You're gonna help yourself a lot if you do an assert (or some sort of check-error-and-log-it) after every call that can return an OSStatus. That way you'll figure out how far you're getting. You probably also want to log the actual OSStatus value when it's != noErr, and then look it up (start in "Audio Unit Component Services Reference" in the Xcode documentation viewer).
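Something as simple as this helper (a sketch - the name CheckError is just illustrative) after every call will tell you exactly where things go wrong:
#include <stdio.h>
#include <stdlib.h>

// Log a readable error and bail out as soon as any Core Audio call fails.
static void CheckError(OSStatus error, const char *operation)
{
    if (error == noErr) return;
    fprintf(stderr, "Error %d in %s\n", (int)error, operation);
    exit(1);
}

// usage:
// CheckError(AudioComponentInstanceNew(filterComponent, &lpFilterUnit),
//            "AudioComponentInstanceNew");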
After you create the filter AudioUnit, I don't get what you're doing with the AudioUnitSetProperty() calls. The second parameter should be the name of a property (something that starts with kAudioUnitProperty...), not a component subtype. That's almost certainly returning an error right there.
remoteIO only has two buses, and they have special meanings: bus 1 is input from the mic, bus 0 is output to the hardware. Trying to connect to remoteIO input scope bus 3 is probably going to be another error.
I suggest you roll back to when you had audio pass-through working. That would mean you had just remoteIO and a connection from output scope / bus 1 to input scope / bus 0.
Then create the filter unit and change your connections so that you connect (see the sketch after this list):
remoteIO output scope bus 1 to filter input scope bus 0
filter output scope bus 0 to remoteIO input scope bus 0
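In code, that wiring might look roughly like this - a sketch assuming remoteIOunit and lpFilterUnit are valid instances, reusing the CheckError helper sketched above. Note that kAudioUnitProperty_MakeConnection is always set on the destination unit, input scope, destination element:
// remoteIO output scope / bus 1 (mic) -> filter input scope / bus 0
AudioUnitConnection micToFilter;
micToFilter.sourceAudioUnit    = remoteIOunit;
micToFilter.sourceOutputNumber = 1;
micToFilter.destInputNumber    = 0;
CheckError(AudioUnitSetProperty(lpFilterUnit, kAudioUnitProperty_MakeConnection,
                                kAudioUnitScope_Input, 0,
                                &micToFilter, sizeof(micToFilter)),
           "connect mic to filter");

// filter output scope / bus 0 -> remoteIO input scope / bus 0 (hardware out)
AudioUnitConnection filterToOut;
filterToOut.sourceAudioUnit    = lpFilterUnit;
filterToOut.sourceOutputNumber = 0;
filterToOut.destInputNumber    = 0;
CheckError(AudioUnitSetProperty(remoteIOunit, kAudioUnitProperty_MakeConnection,
                                kAudioUnitScope_Input, 0,
                                &filterToOut, sizeof(filterToOut)),
           "connect filter to output");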
The other thing that's going to be a problem is that all these iOS 5 filters seem to want floating-point LPCM formats, which is not the canonical format your other units will default to. You may have to get the stream format from the filter unit (input and output scope are probably the same) and then set that as the format that remoteIO output scope / bus 1 produces and remoteIO input scope / bus 0 accepts. Another option would be to introduce AUConverter units before and after the filter unit.
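A sketch of that format step, under the same assumptions as above:
AudioStreamBasicDescription filterFormat;
UInt32 size = sizeof(filterFormat);
// Ask the filter what format it wants...
CheckError(AudioUnitGetProperty(lpFilterUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input, 0, &filterFormat, &size),
           "get filter stream format");
// ...then impose it on both sides of remoteIO.
CheckError(AudioUnitSetProperty(remoteIOunit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output, 1, &filterFormat, sizeof(filterFormat)),
           "set format on remoteIO output scope / bus 1");
CheckError(AudioUnitSetProperty(remoteIOunit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input, 0, &filterFormat, sizeof(filterFormat)),
           "set format on remoteIO input scope / bus 0");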
The first answer given here just saved me a lot more frustration. Nowhere does the Apple documentation tell you that the stream formats for the Effect units require floating point. I couldn't figure out why it kept failing to play my audio properly until I read this post. I followed the advice above, retrieved the stream format from the low-pass filter unit, and used it to set up the two converter units that I created (i.e. set the output format of the pre-filter converter and the input format of the post-filter converter). Once I did that and connected all the nodes together, it started working as expected.
I'm trying to use a low-pass filter, and when trying to do as suggested (i.e. set the format), I keep getting the error "the operation could not be completed". What in this code is faulty?
After retrieving the lowpassUnit I also check for errors, but there are none.
result = AudioUnitSetProperty(lowpassUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &stereoStreamFormat, sizeof(stereoStreamFormat));
if (noErr != result)
{
    NSLog(@"%@", [NSError errorWithDomain:NSOSStatusErrorDomain code:result userInfo:nil]);
    return;
}
PS: If anyone knows of proper audio unit documentation, please share - the official documentation is really lacking.

DirectShow's PushSource filters cause IMediaControl::Run to return S_FALSE

I'm messing around with the PushSource sample filter shipped with the DirectShow SDK and I'm having the following problem:
When I call IMediaControl::Run(), it returns S_FALSE which means "the graph is preparing to run, but some filters have not completed the transition to a running state". MSDN suggests to then call IMediaControl::GetState() and wait for the transition to finish.
And so, I call IMediaControl::GetState(INFINITE, ...) which is supposed to solve the problem.
However, to the contrary, it returns VFW_S_STATE_INTERMEDIATE even though I've specified an infinite timeout.
I've tried all three variations (Bitmap, Bitmap Set, and Desktop) and they all behave the same way, which initially led me to believe there is a bug in there somewhere.
However, I then tried using IFilterGraph::AddSourceFilter to do the same thing and it behaved identically, which must mean the problem is in my rendering code:
CoInitialize(0);

IGraphBuilder *graph = 0;
assert(S_OK == CoCreateInstance(CLSID_FilterGraph, 0, CLSCTX_INPROC_SERVER, IID_IGraphBuilder, (void**)&graph));

IBaseFilter *pushSource = 0;
graph->AddSourceFilter(L"sample.bmp", L"Source", &pushSource);

IPin *srcOut = 0;
assert(S_OK == GetPin(pushSource, PINDIR_OUTPUT, &srcOut));
graph->Render(srcOut);

IMediaControl *c = 0;
IMediaEvent *pEvent;
assert(S_OK == graph->QueryInterface(IID_IMediaControl, (void**)&c));
assert(S_OK == graph->QueryInterface(IID_IMediaEvent, (void**)&pEvent));

HRESULT hr = c->Run();
if (hr != S_OK)
{
    if (hr == S_FALSE)
    {
        OAFilterState state;
        hr = c->GetState(INFINITE, &state);
        assert(hr == S_OK);
    }
}

long code;
assert(S_OK == pEvent->WaitForCompletion(INFINITE, &code));
Does anyone know how to fix this?
IBaseFilter *pushSource = 0;
graph->AddSourceFilter(L"sample.bmp", L"Source", &pushSource);
AddSourceFilter adds a default source filter for the given file; I don't think it will add your PushSource sample filter.
I would recommend adding the graph to the ROT so you can inspect it with GraphEdit; a helper for that is sketched below.
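The standard helper from the DirectShow documentation looks like this (a sketch; it needs <strsafe.h> and should be called once after the graph is built, keeping the returned register value for later revocation):
HRESULT AddToRot(IUnknown *pUnkGraph, DWORD *pdwRegister)
{
    IMoniker *pMoniker = NULL;
    IRunningObjectTable *pROT = NULL;
    if (FAILED(GetRunningObjectTable(0, &pROT)))
        return E_FAIL;

    // Build the well-known display name that GraphEdit searches for.
    WCHAR wsz[256];
    StringCchPrintfW(wsz, 256, L"FilterGraph %08x pid %08x",
                     (DWORD_PTR)pUnkGraph, GetCurrentProcessId());

    HRESULT hr = CreateItemMoniker(L"!", wsz, &pMoniker);
    if (SUCCEEDED(hr))
    {
        hr = pROT->Register(ROTFLAGS_REGISTRATIONKEEPSALIVE, pUnkGraph,
                            pMoniker, pdwRegister);
        pMoniker->Release();
    }
    pROT->Release();
    return hr;
}
With the graph registered, GraphEdit's File -> Connect to Remote Graph will show the running graph and the state of each filter.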
And what happens if you don't call GetState()?
hr = pMediaControl->Run();
if (FAILED(hr)) {
    /// handle error
}

long evCode = 0;
while (evCode == 0)
{
    pEvent->WaitForCompletion(1000, &evCode);
    /// other code
}
Open GraphEditPlus, add your filter, render its pin, and press Run. You'll then see the state of each filter separately, so you can tell which filter didn't run and why.