Char not being saved to file correctly - c++-cli

I want to save the contents of a ComboBox to a file. The code below correctly shows a MessageBox with "Marker 4" (the text in the ComboBox), but the saved file contains "03038D8C" instead of "Marker 4". I guess this is the memory address of the variable or something similar? How can I correctly output the "Marker 4" string to the file?
private: System::Windows::Forms::ComboBox^ cmbMarker;
private: System::String^ strMarkerText;
...
strMarkerText = this->cmbMarker->Text;
...
ofstream myfile;
WIN32_FIND_DATA data;
pin_ptr<const wchar_t> wname = PtrToStringChars(strMarkerText);
FindFirstFile(wname, &data);
::MessageBox(0, wname, L"Marker inserted", MB_OK);
myfile << "=====MARKER '" << wname << "' INSERTED AT " << datetime << " =====" << endl;
[There may be more than this wrong with the snippet; I'm not from a C++/CLI background, but I appreciate your help! There are no compiler errors and the code runs fine except for the issue described above, i.e. the file receives "03038D8C" rather than the cleartext string contents ("Marker 4").]
Thanks,
Nick

The problem is that you're using a narrow stream with a wide string. Use std::wofstream instead of std::ofstream and it should work fine.
That being said, I agree with @jonsca -- why drag iostreams into a C++/CLI app?
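A minimal sketch of that change (the file name and append mode here are assumptions, not taken from the original code):
std::wofstream myfile("markers.txt", std::ios::app); // wide stream, so wchar_t data is written as text
pin_ptr<const wchar_t> wname = PtrToStringChars(strMarkerText);
myfile << L"=====MARKER '" << wname << L"' INSERTED=====" << std::endl;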

I'm not sure of your application's requirements, but have you considered using the .NET "equivalents" of the functions, like the System::IO::Directory methods (specifically GetFiles in place of FindFirstFile) and System::IO::StreamWriter in place of the ofstream object? This way the code in this section jibes with the CLR portion of your code.
I know that's not precisely what you were asking for, but I have a feeling that the pointer in your code might need to be handled differently, and I'm unsure whether it would have to be marshaled across the managed/unmanaged boundary.
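A minimal sketch of the StreamWriter route (the file name and the append flag are assumptions):
System::IO::StreamWriter^ writer = gcnew System::IO::StreamWriter("markers.txt", true); // true = append to existing file
writer->WriteLine(System::String::Format("=====MARKER '{0}' INSERTED=====", strMarkerText));
writer->Close();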

I ended up converting the System::String^ to a std::string and inserting that into the stream directly (rather than converting it to a wchar_t).
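In case a snippet helps anyone else, a sketch of that conversion using the msclr marshalling helpers (assuming <msclr/marshal_cppstd.h> is available):
#include <msclr/marshal_cppstd.h>
std::string narrowMarker = msclr::interop::marshal_as<std::string>(strMarkerText);
myfile << "=====MARKER '" << narrowMarker << "' INSERTED AT " << datetime << " =====" << endl;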
The mix between native C++ and C++/CLI is because I'm building on an SDK example written in native C++, but wanted to add a form to it (in Visual Studio 2008), which turned it into this "mix". I realize that's not optimal, but so far it seems to work :-)! I'll try to stick to the CLI equivalents if I run into any more errors going forward. Thanks for your help!

Related

Error: No operator "=" matches these operands in "Servo_Project.cpp", Line: 15, Col: 22

So I tried using code from another post around here to see if I could use it; it was meant to drive a servo motor from a potentiometer, but when I attempted to compile it, it gave the error above saying No operator "=" matches these operands in "Servo_Project.cpp". How do I go about fixing this error?
Just in case I'll say this: the board I was trying to compile the code for was a NUCLEO-L476RG; the post I mentioned used a Nucleo L496ZG board and a Tower Pro Micro Servo 9G.
#include "mbed.h"
#include "Servo.h"
Servo myservo(D6);
AnalogOut MyPot(A0);
int main() {
    float PotReading;
    PotReading = MyPot.read();
    while(1) {
        for(int i=0; i<100; i++) {
            myservo = (i/100);
            wait(0.01);
        }
    }
}
This line:
myservo = (i/100);
Is wrong in a couple of ways. First, i/100 will always be zero - integer division truncates in C++. Second, there's no = operator that allows an integer value to be assigned to a Servo object. You need to invoke some kind of Servo method instead, likely write().
myservo.write(SOMETHING);
The SOMETHING should be the position or speed of the servo you're trying to get working. See the Servo class reference for an explanation. Your code tries to use fractions from 0-1, and that isn't going to work - the Servo wants a position/speed between 0 and 180.
You should look in the Servo.h header to see what member functions and operators are implemented.
Assuming what you are using is this, it does have:
Servo& operator= (float percent);
Note, though, that the parameter is a float and you are passing an int (the parameter is also in the range 0.0 to 1.0 - so not a "percent" as its name suggests - so be wary; both the documentation and the naming are poor). You should have:
myservo = i/100.0f;
However, even though i / 100 would produce zero for every i in the loop, that does not explain the error, since an implicit conversion should be possible - even if clearly undesirable. You should look in the actual header you are using to see whether the operator= is declared - possibly you have the wrong file, a different version, or an entirely different implementation that happens to use the same name.
I also notice that if you look in the header, there is no documentation mark-up for this function, and the Servo& operator= (Servo& rhs); member is not documented at all - hence the confusing automatically generated "Shorthand for the write and read functions." on the Servo doc page, when the function shown is only one of those things. It is possible it has been removed from your version.
Given that the documentation is incomplete and that the operator= looks like an afterthought, the simplest solution is to use the read() / write() members directly in any case. Or implement your own Servo class - it appears to be only a thin wrapper/facade over the PwmOut class anyway. Since PwmOut is actually part of mbed rather than user-contributed code of unknown quality, you may be on firmer ground.
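For reference, a minimal sketch of the loop using write() instead of operator= (assuming the user-contributed mbed Servo class, whose write() takes a normalised position in the 0.0 to 1.0 range):
for (int i = 0; i < 100; i++) {
    myservo.write(i / 100.0f); // float division, so the position actually sweeps instead of truncating to 0
    wait(0.01);
}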

How to read byte data from a StorageFile in cppwinrt?

A 2012 answer at StackOverflow (“How do I read a binary file in a Windows Store app”) suggests this method of reading byte data from a StorageFile in a Windows Store app:
IBuffer buffer = await FileIO.ReadBufferAsync(theStorageFile);
byte[] bytes = buffer.ToArray();
That looks simple enough. As I am working in cppwinrt I have translated that to the following, within the same IAsyncAction that produced a vector of StorageFiles. First I obtain a StorageFile from the VectorView using theFilesVector.GetAt(index);
//Then this line compiles without error:
IBuffer buffer = co_await FileIO::ReadBufferAsync(theStorageFile);
//But I can’t find a way to make the buffer call work.
byte[] bytes = buffer.ToArray();
“byte[]” can’t work, to begin with, so I change that to byte*, but then
the error is “class ‘winrt::Windows::Storage::Streams::IBuffer’ has no member ‘ToArray’”
And indeed Intellisense lists no such member for IBuffer. Yet IBuffer was specified as the return type for ReadBufferAsync. It appears the above sample code cannot function as it stands.
In the documentation for FileIO I find it recommended to use DataReader to read from the buffer, which in cppwinrt should look like
DataReader dataReader = DataReader::FromBuffer(buffer);
That compiles. It should then be possible to read bytes with the following DataReader method, which is fortunately supplied in the UWP docs in cppwinrt form:
void ReadBytes(Byte[] value) const;
However, that does not compile because the type Byte is not recognized in cppwinrt. If I create a byte array instead:
byte* fileBytes = new byte(buffer.Length());
that is not accepted. The error is
‘No suitable constructor exists to convert from “byte*” to “winrt::arrayView::<uint8_t>”’
uint8_t is of course a byte, so let’s try
uint8_t fileBytes = new uint8_t(buffer.Length());
That is wrong - clearly we really need to create a winrt::array_view. Yet a 2015 Reddit post says that array_view “died” and I’m not sure how to declare one, or if it will help. That original one-line method for reading bytes from a buffer is looking so beautiful in retrospect. This is a long post, but can anyone suggest the best current method for simply reading raw bytes from a StorageFile reference in cppwinrt? It would be so fine if there were simply GetFileBytes() and GetFileBytesAsync() methods on StorageFile.
---Update: here's a step forward. I found a comment from Kenny Kerr last year explaining that array_view should not be declared directly, but that std::vector or std::array can be used instead. And that is accepted as an argument for the ReadBytes method of DataReader:
std::vector<unsigned char> fileBytes;
dataReader.ReadBytes(fileBytes);
Only trouble now is that the std::vector is receiving no bytes, even though the size of the referenced file is correctly returned in buffer.Length() as 167,513 bytes. That seems to suggest the buffer is good, so I'm not sure why the ReadBytes method applied to that buffer would produce no data.
Update #2: Kenny suggests reserving space in the vector, which is something I had tried, this way:
m_file_bytes.reserve(buffer.Length());
But it didn't make a difference. Here is a sample of the code as it now stands, using DataReader.
buffer = co_await FileIO::ReadBufferAsync(nextFile);
dataReader = DataReader::FromBuffer(buffer);
//The following line may be needed, but crashes
//co_await dataReader.LoadAsync(buffer.Length());
if (buffer.Length())
{
    m_file_bytes.reserve(buffer.Length());
    dataReader.ReadBytes(m_file_bytes);
}
The crash, btw, is
throw hresult_error(result, hresult_error::from_abi);
Is it confirmed, then, that the original 2012 solution quoted above cannot work in today's world? But of course there must be some way to read bytes from a file, so I'm just missing something that may be obvious to another.
Final (I think) update: Kenny's suggestion that the vector needs a size has hit the mark. If the vector is first prepared with m_file_bytes.assign(buffer.Length(),0) then it does get filled with file data. Now my only worry is that I don't really understand the way IAsyncAction is working and maybe could have trouble looping this asynchronously, but we'll see.
The array_view bridges the gap between Windows APIs and C++ array types. In this example, the ReadBytes method expects the caller to provide some array that it can copy bytes into. The array_view forwards a pointer to the caller's array as well as its size. In this case, you're passing an empty vector. Try resizing the vector before calling ReadBytes.
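A minimal sketch of that fix (assuming buffer already holds the whole file from ReadBufferAsync):
std::vector<uint8_t> fileBytes(buffer.Length()); // sized, not merely reserved
DataReader reader = DataReader::FromBuffer(buffer);
reader.ReadBytes(fileBytes); // fills all buffer.Length() bytes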
When you know how many bytes to expect (in this case 2 bytes), this worked for me:
std::vector<unsigned char> fileBytes;
fileBytes.resize(2);
DataReader reader = DataReader::FromBuffer(buffer);
reader.ReadBytes(fileBytes);
cout << fileBytes[0] << endl;
cout << fileBytes[1] << endl;

Input setting using Registers

I have a simple C program for printing n Fibonacci numbers and I would like to compile it to an ELF object file. Instead of setting the number of Fibonacci numbers (n) directly in my C code, I would like to set it in a register, since I am simulating it for an ARM processor. How can I do that?
Here is the code snippet
#include <stdio.h>
#include <stdlib.h>
#define ITERATIONS 3
static float fib(float i) {
    return (i>1) ? fib(i-1) + fib(i-2) : i;
}
int main(int argc, char **argv) {
    float i;
    printf("starting...\n");
    for(i=0; i<ITERATIONS; i++) {
        printf("fib(%f) = %f\n", i, fib(i));
    }
    printf("finishing...\n");
    return 0;
}
I would like to set the ITERATIONS counter in my Registers rather than in the code.
Thanks in advance
The register keyword can be used to suggest to the compiler that it use registers for the iterator and the number of iterations:
register float i;
register int numIterations = ITERATIONS;
but that will not help much. First of all, the compiler may or may not use your suggestion. Next, values will still need to be placed on the stack for the call to fib(), and, finally, depending on what functions you call within your loop, code in the procedure you are calling could save your register contents in the stack frame at procedure entry and restore them as part of the procedure return.
If you really need to make every instruction count, then you will need to write machine code (using an assembly language). That way, you have direct control over your register usage. Assembly language programming is not for the faint of heart: development is several times slower than in higher-level languages, your risk of introducing bugs is greater, and they are much more difficult to track down. High-level languages were developed for a reason, and the C language was developed to help write Unix. The minicomputers that ran the first Unix systems were extremely slow, but even then it was more important to have code that took less time to write, had fewer bugs, and was easier to debug than assembler.
If you want to try this, here are the answers to a previous question on Stack Overflow about resources for ARM programming that might be helpful.
One tactic you might take is to isolate your performance-critical code into a procedure, write the procedure in C, then capture the generated assembly language output. Then rewrite that assembly to be more efficient. Test thoroughly, and get at least one other set of eyeballs to look the resulting code over.
Good Luck!
Make ITERATIONS a variable rather than a literal constant, then you can set its value directly in your debugger/simulator's watch or locals window just before the loop executes.
Alternatively, as it appears you have stdio support, why not just accept the value via console input?
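A minimal sketch of that route, replacing the ITERATIONS macro with a runtime value inside the existing main() (assuming stdio/argv are actually available in your simulator, e.g. via semihosting):
int iterations = ITERATIONS; /* default */
if (argc > 1) {
    iterations = atoi(argv[1]); /* or scanf("%d", &iterations); to read it interactively */
}
for (i = 0; i < iterations; i++) {
    printf("fib(%f) = %f\n", i, fib(i));
}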

Crypto++ AES Decrypt how to?

There are next to no noob guides to Crypto++ out there - or none that I've found, anyway. What I want to do is decrypt an array of uchars I generate with another AES encrypter. Where would I start? I have the library built and linking grand. Do I need to set anything up, or do I just call a function on my array (and if so, what function)?
I'd really appreciate some help from someone who knows this stuff.
Thanks
I wouldn't say I "know my stuff" too much about this, but here's some test code I put together to encrypt/decrypt strings with AES. Extending this to use some other data shouldn't be too hard.
// key and iv are assumed to be existing byte arrays (AES::DEFAULT_KEYLENGTH
// and AES::BLOCKSIZE bytes respectively), and plaintext a std::string.
string output;
CTR_Mode<AES>::Encryption encrypt((const byte*)key, AES::DEFAULT_KEYLENGTH, (const byte*)iv);
StringSource(plaintext, true, new StreamTransformationFilter(encrypt, new StringSink(output)));
cout << "Encrypted: " << output << endl;
string res;
CTR_Mode<AES>::Decryption decrypt((const byte*)key, AES::DEFAULT_KEYLENGTH, (const byte*)iv);
StringSource(output, true, new StreamTransformationFilter(decrypt, new StringSink(res)));
cout << "Decrypted: " << res << endl;
While working on this, I found the source code in the Crypto++ test program (the VisualStudio project called "cryptest") to be a big help. It was a little tough to read at first, but it gets easier as you work with it. I also got a lot of help understanding the available block cipher modes from Wikipedia (http://en.wikipedia.org/wiki/Block_cipher_modes_of_operation).
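Since the question mentions an array of uchars rather than strings, the same pipeline can be fed from a raw buffer with ArraySource/ArraySink (a sketch; ciphertext, ciphertextLen, key and iv are assumed to already exist, and recovered must be large enough for the plaintext):
byte recovered[1024];
CTR_Mode<AES>::Decryption decrypt((const byte*)key, AES::DEFAULT_KEYLENGTH, (const byte*)iv);
ArraySource(ciphertext, ciphertextLen, true,
    new StreamTransformationFilter(decrypt, new ArraySink(recovered, sizeof(recovered))));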
Here's a couple of resources from a Google search:
http://www.bitvise.com/users-guide.html
http://andreyvit.livejournal.com/37576.html

Calling functions from within function(float *VeryBigArray,long SizeofArray) from within objC method fails with EXC_BAD_ACCESS

Ok, I finally found the problem. It was inside the C function (CarbonTuner2), not the objC method. Inside the function I was creating an array of the same size as the file, so if the file size was big it created a really big array, and my guess is that when I called another function from there, the local variables were put on the stack, which caused the EXC_BAD_ACCESS. What I did then was, instead of using a variable to set the size of the array, I put the number in directly. Then the code didn't even compile - it knew. The error was something like: "Array size too big". I guess working 20+ hours in a row isn't good XD. But I am definitely going to look into tools other than step-by-step debugging to figure these ones out. Thanks for your help. Here is the code; if you divide gFileByteCount by 2 you don't get the error anymore:
// ConverterController.h
#import <Cocoa/Cocoa.h>
#import "Converter.h"

@interface ConverterController : NSObject {
    UInt64 gFileByteCount;
}
-(IBAction)ProcessFile:(id)sender;
void CarbonTuner2(long numSampsToProcess, long fftFrameSize, long osamp);
@end

// ConverterController.m
#include "ConverterController.h"

@implementation ConverterController
-(IBAction)ProcessFile:(id)sender {
    UInt32 packets = gTotalPacketCount; // alloc a buffer of memory to hold the data read from disk.
    gFileByteCount = 250000;
    long LENGTH = (long)gFileByteCount;
    CarbonTuner2(LENGTH, (long)8192/2, (long)4*2);
}
@end

void CarbonTuner2(long numSampsToProcess, long fftFrameSize, long osamp)
{
    long numFrames = numSampsToProcess / fftFrameSize * osamp;
    float g2DFFTworksp[numFrames+2][2 * fftFrameSize]; // large stack VLA - this is what blows the stack for big files
    double hello = sin(2.345);
}
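For what it's worth, a minimal sketch of one way around the oversized stack array: put the workspace on the heap instead of in a VLA (same dimensions as above, flattened into one block and indexed manually):
void CarbonTuner2(long numSampsToProcess, long fftFrameSize, long osamp)
{
    long numFrames = numSampsToProcess / fftFrameSize * osamp;
    // heap allocation, so a large file no longer overflows the stack
    float *g2DFFTworksp = (float *)malloc(sizeof(float) * (numFrames + 2) * (2 * fftFrameSize));
    // index as g2DFFTworksp[frame * (2 * fftFrameSize) + bin] instead of [frame][bin]
    double hello = sin(2.345);
    free(g2DFFTworksp);
}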
Your crash has nothing to do with incompatibilities between C and ObjC.
And as previous posters said, you don't need to include math.h.
Run your code under gdb, and see where the crash happens by using backtrace.
Are you sure you're not sending bad arguments to the math functions?
E.g. this causes BAD_ACCESS:
double t = cos(*(double *)NULL);
Objective C is built directly on C, and the C underpinnings can and do work.
For an example of using math.h and parts of standard library from within an Objective C module, see:
http://en.wikibooks.org/wiki/Objective-C_Programming/syntax
There are other examples around.
Some care is needed when passing the variables around: use C variables for the C and standard library calls, and don't mix C data types and Objective-C data types incautiously. You'll usually want a conversion here.
If that is not the case, then please consider posting the code involved, and the error(s) you are receiving.
And with all respect due to Mr Hellman's response, I've hit errors when I don't have the header files included; I prefer to include the headers. But then, I tend to dial the compiler diagnostics up a couple of notches, too.
For what it's worth, I don't include math.h in my Cocoa app but have no problem using math functions (in C).
For example, I use atan() and don't get compiler errors, or run time errors.
Can you try this without including math.h at all?
First, you should add your code to your question, rather than posting it as an answer, so people can see what you're asking about. Second, you've got all sorts of weird problems with your memory management here - gFileByteCount is used to size a bunch of buffers, but it's set to zero, and doesn't appear to get re-set anywhere.
err = AudioFileReadPackets (fileID,
false, &bytesReturned, NULL,0,
&packets,(Byte *)rawAudio);
So, at this point, you pass a zero-sized buffer to AudioFileReadPackets, which promptly overruns the heap, corrupting the value of who knows what other variables...
fRawAudio = malloc(gFileByteCount/(BITS/8)*sizeof(fRawAudio));
Here's another, minor error - you want sizeof(*fRawAudio) here, since you're trying to allocate an array of floats, not an array of float pointers. Fortunately, those entities are the same size, so it doesn't matter.
You should probably start with some example code that you know works (SpeakHere?) and modify it. I suspect there are other similar problems in the code you posted, but I don't have time to find them right now. At least get the rawAudio buffer appropriately sized and use the values returned from AudioFileReadPackets appropriately.
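A sketch of the allocation fixes described above (gFileByteCount must hold the file's real size before the buffer is allocated; BITS is assumed to be the sample bit depth, as in the original code):
/* set gFileByteCount from the file's actual size before allocating */
fRawAudio = (float *)malloc(gFileByteCount / (BITS / 8) * sizeof(*fRawAudio));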